Pinned · Published in Geek Culture · Rule-based Reasoning in LLMs via Stepwise Refinement. Remember learning to program? My teacher, an ardent disciple of Niklaus Wirth, taught us the principle of stepwise refinement as we worked… (May 5)
Pinned · Distilling Distillation. Question: Is 2024 the year of slimming down LLMs into lithe and sprightly SLMs? SLM, of course, stands for Small Language Model, and aptly… (Feb 4)
Pinned · SELF-INSTRUCT, the Low-cost LLM Alignment Process. This post is an easy-to-digest explanation of the seminal SELF-INSTRUCT paper that led to another influential work, Stanford Alpaca. (May 8, 2023)
Pinned · Spend Tokens to Make Tokens. The old adage that you have to spend money to make money can bear fruit if you know what to spend it on. In the case of ChatGPT, you have to… (Mar 20, 2023)
Pinned · ChatGPT’s Brush with Deception. Gandhi, the storied leader of India’s fight for freedom, logged his life’s experiments with truth, and that inspired me to log ChatGPT’s… (Dec 11, 2022)
Published in Geek Culture · Experiments in Text-to-Verilog with ChatGPT-4o. This post logs my experiments with ChatGPT-4o on taking logic expressed in natural language and translating it into Verilog… (Aug 17)
“Where’s the Beef”, Codestral’s Fill-In-the-Middle Magic. Fill-in-the-Middle (FIM) is the ability of an LLM to generate the middle tokens sandwiched between (supplied) prefix and suffix tokens (a minimal prompt-assembly sketch follows after this list). To… (Jun 4)
Distilling with LLM-Generated Rationales Yields Outperformance in Task-Specific Fine-tuning! Large Language Models are challenging to serve in practice, making implementers gravitate toward distilled models. Distillation yields… (May 28, 2023)
Astronauts Riding Horses? That’s Old. Daydream your way to riding a Unicorn. Here’s a jargon-free way to explain Dreambooth so you can dream up hitherto unimagined use-cases. We call our app DayDream, a lightweight… (Nov 14, 2022)
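Since the Codestral post above only hints at how prefix/suffix prompting works, here is a minimal, hypothetical sketch of how a FIM prompt is typically assembled. The sentinel tokens <PRE>, <SUF>, and <MID> are placeholder assumptions, not Codestral's actual special tokens; consult the model's documentation for the real format.

```python
# A minimal sketch of Fill-in-the-Middle (FIM) prompt assembly.
# NOTE: <PRE>, <SUF>, <MID> are hypothetical sentinel tokens used only
# for illustration; real FIM models define their own special tokens.

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Arrange prefix and suffix so the model generates the missing middle span."""
    return f"<PRE>{prefix}<SUF>{suffix}<MID>"

if __name__ == "__main__":
    prefix = "def greet(name):\n    "
    suffix = "\n    return message\n"
    prompt = build_fim_prompt(prefix, suffix)
    # Whatever the model emits after <MID> is the inferred middle,
    # e.g. 'message = f"Hello, {name}!"'
    print(prompt)
```

The point is simply that the suffix is supplied up front, so the model's continuation fills the gap between the two spans rather than extending past the end of the prefix.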