Pinned: Rule-based Reasoning in LLMs via Stepwise Refinement
Remember learning to program? My teacher, an ardent disciple of Niklaus Wirth, taught us the principle of stepwise refinement as we worked…
May 5, 2024 · 50
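For readers new to the term: stepwise refinement is Wirth's discipline of writing the top-level intent first, then refining each named step into concrete code. A minimal Python sketch of the idea (my illustration here, not code from the post):

```python
# Stepwise refinement, Wirth-style: state the whole task at the top
# level, then refine each named step into its own concrete function.
# (Illustrative only; the post applies the same discipline to LLM reasoning.)

def sort_report(lines):
    """Top level: the whole task in three refinable steps."""
    records = parse(lines)    # step 1, refined below
    ordered = order(records)  # step 2, refined below
    return render(ordered)    # step 3, refined below

def parse(lines):
    # Refinement of step 1: split each "name,score" line into a pair.
    return [(name, int(score)) for name, score in
            (line.split(",") for line in lines)]

def order(records):
    # Refinement of step 2: highest score first.
    return sorted(records, key=lambda r: r[1], reverse=True)

def render(records):
    # Refinement of step 3: back to printable lines.
    return [f"{name}: {score}" for name, score in records]

print(sort_report(["ada,90", "alan,85"]))  # ['ada: 90', 'alan: 85']
```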
Pinned: Distilling Distillation
Question: Is 2024 the year of slimming down LLMs into lithe and sprightly SLMs? SLM, of course, stands for Small Language Model, and aptly…
Feb 4, 2024 · 8
Pinned: SELF-INSTRUCT, the Low-cost LLM Alignment Process
This post is an easy-to-digest explanation of the seminal SELF-INSTRUCT paper that led to another influential work, Stanford Alpaca.
May 8, 2023 · 151
Pinned: Spend Tokens to Make Tokens
The old adage “you have to spend money to make money” can bear fruit if you know what to spend it on. In the case of ChatGPT, you have to…
Mar 20, 2023 · 8
Pinned: ChatGPT’s Brush with Deception
Gandhi, the storied leader of India’s fight for freedom, logged his life’s experiments with truth, and that inspired me to log ChatGPT’s…
Dec 11, 2022 · 38
Perplexity (the Metric) Illuminated
Perplexity, the metric, measures a language model’s ability to predict the next word in a sequence. It’s a measure of how perplexed (or…
Feb 11 · 8
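For the record, the formula behind that teaser: perplexity is the exponential of the average negative log-likelihood of the tokens the model actually saw. A minimal sketch, with illustrative input values:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the average negative log-likelihood.

    token_logprobs: natural-log probabilities the model assigned to
    each actual next token in the sequence (illustrative input).
    """
    avg_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_nll)

# A model that assigns probability 0.25 to every next token is exactly
# as "perplexed" as a uniform guess among 4 options: perplexity = 4.
print(perplexity([math.log(0.25)] * 10))  # ≈ 4.0
```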
Experiments in Text-to-Verilog with ChatGPT-4o
This post logs my experiments with ChatGPT-4o, taking logic expressed in natural language and translating it to Verilog…
Aug 17, 2024 · 6
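To give a flavor of the setup, here is a minimal sketch of such a request, assuming the standard openai Python SDK; the prompt below is illustrative, not one of the prompts from the post:

```python
# Minimal text-to-Verilog request sketch (assumes the openai Python SDK
# and an OPENAI_API_KEY in the environment; illustrative prompt only).
from openai import OpenAI

client = OpenAI()

spec = ("Write a Verilog module named majority3 with inputs a, b, c "
        "and output y, where y is 1 when at least two inputs are 1.")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": spec}],
)
print(response.choices[0].message.content)  # the generated Verilog
```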
“Where’s the Beef”, Codestral’s Fill-In-the-Middle Magic
Fill-in-the-Middle (FIM) is the ability of an LLM to generate the middle tokens sandwiched between (supplied) prefix and suffix tokens. To…
Jun 4, 2024 · 2
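To make the prefix/suffix/middle framing concrete, a tiny illustration (token formats vary by model; this is not Codestral’s actual prompt template):

```python
# What FIM asks of the model: given a prefix and a suffix, produce the
# missing middle so that prefix + middle + suffix is coherent code.
prefix = "def average(xs):\n    "
suffix = "\n    return total / len(xs)"

# A FIM-capable model such as Codestral would be expected to fill in
# something like the following for the middle:
middle = "total = sum(xs)"

print(prefix + middle + suffix)
```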