Auro Tripathy
69 Followers

Model smith, code smith, wordsmith https://www.linkedin.com/in/aurotripathy/

Pinned

ChatGPT’s Brush with Deception

Gandhi, the storied leader of India’s fight for freedom, logged his life’s experiments with truth, and that inspired me to log ChatGPT’s understanding of deceptive chatter. Initially, ChatGPT was shy about discussing deception, so I insisted that it ‘must follow’ my guidelines. The sample sentence is…

ChatGPT

2 min read

Published in MLearning.ai · Nov 14, 2022

Astronauts Riding Horses? That’s Old. Daydream your way to riding a Unicorn.

The current crop of text-to-image generators is awe-inspiring yet generic, i.e., you cannot insert your own personalization into the generated image. That changed with Dreambooth. Below is a jargon-free explanation of Dreambooth usage, so you can dream up hitherto impossible use cases. We do that with our app DayDream, a lightweight…
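
To make the idea concrete, here is a minimal inference sketch (not from the article) using the Hugging Face diffusers library. It assumes you have already fine-tuned a Stable Diffusion checkpoint with Dreambooth on photos of your subject; the checkpoint path and the rare identifier token "sks" are illustrative assumptions.

# Sketch: generate a personalized image from a Dreambooth-tuned
# Stable Diffusion checkpoint. The path and the "sks" token are hypothetical.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/dreambooth-tuned-checkpoint",  # your fine-tuned model (assumed)
    torch_dtype=torch.float16,
).to("cuda")

# "sks" is the rare token bound to your subject during fine-tuning.
image = pipe("a photo of sks person riding a unicorn").images[0]
image.save("daydream.png")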

Stable Diffusion

2 min read

Published in MLearning.ai · Oct 27, 2022

GPT3 does Dishes? No, use it to Query your Dishwasher Repair Manual

Oh, have I got your attention now? We’re on a mission to retire the phrase RTFM and coin something new: QTM (Query the Manual). For the Impatient: Arm yourself with an OpenAI API key, head over to my GitHub repo, and run the notebook. You should be able to reproduce my results.
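
For a flavor of QTM without opening the notebook, here is a hedged sketch of the pattern using the OpenAI Python library of that era; the manual excerpt, question, and model choice are illustrative, not taken from the repo.

# Sketch of "Query the Manual": put a manual passage in the prompt and let
# GPT-3 answer against it. Passage, question, and model are illustrative.
import openai

openai.api_key = "YOUR-OPENAI-API-KEY"

manual_excerpt = ("If the dishwasher does not drain, check the drain hose "
                  "for kinks and clean the filter at the bottom of the tub.")
question = "My dishwasher will not drain. What should I check first?"

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3 completion model of the time
    prompt=f"Manual:\n{manual_excerpt}\n\nQuestion: {question}\nAnswer:",
    max_tokens=100,
    temperature=0,
)
print(response["choices"][0]["text"].strip())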

GPT-3

3 min read

Published in Geek Culture · Oct 10, 2022

Long for Symbolic Processing? Meanwhile, get to know your Tokenizer

The lively debate about augmenting machine learning with symbolic processing rages on. Until we get a breakthrough, we must live with tokenizing the input text and converting the tokens into numeric identifiers (token IDs). …
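
Getting acquainted with your tokenizer takes only a few lines; a minimal sketch with the Hugging Face transformers library follows (the BERT checkpoint is just a common choice, not prescribed by the article).

# Sketch: inspect how a tokenizer splits text and maps it to token IDs.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

text = "Tokenizers split text into sub-word units."
print(tokenizer.tokenize(text))  # sub-word strings, e.g. ['token', '##izer', ...]
print(tokenizer.encode(text))    # numeric token IDs, with [CLS] and [SEP] added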

NLP

3 min read

Published in MLearning.ai · Jan 10, 2022

A BERT Flavor to Sample and Savor

Thankfully, we can derive a variety of models from the BERT architecture to fit our memory and latency needs. It turns out that model capacity (the number of parameters) is determined by three variables: the number of layers, the hidden embedding size, and the number of attention heads. This post puts the…
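
As a quick illustration of those three knobs (my example, not taken from the post), transformers’ BertConfig lets you dial them directly and count the resulting parameters; the "small" sizes below are assumptions chosen for contrast.

# Sketch: vary layers, hidden size, and attention heads, then count parameters.
from transformers import BertConfig, BertModel

configs = {
    "base":  BertConfig(num_hidden_layers=12, hidden_size=768, num_attention_heads=12),
    "small": BertConfig(num_hidden_layers=4,  hidden_size=512, num_attention_heads=8),
}
for name, config in configs.items():
    model = BertModel(config)  # randomly initialized; we only want the size
    params = sum(p.numel() for p in model.parameters())
    print(f"BERT-{name}: {params / 1e6:.1f}M parameters")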

BERT

2 min read

Published in MLearning.ai · Dec 27, 2021

The Unreasonable Effectiveness of Training With Jitter (i.e., How to Reduce Overfitting)

In many scenarios where we’re learning from a small dataset, an overfitted model is a likely outcome. By that we mean the model may perform OK on the training data but does not generalize well to test data. In this post, we highlight a simple yet powerful way to…
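
The recipe fits in a few lines of PyTorch; here is a minimal sketch (the Gaussian noise and its scale sigma are my illustrative choices; the post may use a different form of jitter).

# Sketch: jitter each training batch with small Gaussian noise so the model
# never sees exactly the same input twice. sigma is a tunable assumption.
import torch

sigma = 0.05

def jitter(batch: torch.Tensor) -> torch.Tensor:
    return batch + sigma * torch.randn_like(batch)

# In the training loop (jitter at training time only, never at evaluation):
# outputs = model(jitter(inputs))
# loss = criterion(outputs, targets)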

Machine Learning

3 min read

Published in MLearning.ai · Nov 7, 2021

NLP, Riches in the Niches

NLP model training has achieved some degree of generalization; however, nuggets of value lie concealed in simple and affordable fine-tuning for specific tasks. A concrete example follows. Transformer-based NLP models are, well, transformational! There’s considerable excitement about openly available NLP models that are trained at scale on vast swaths of…
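
The fine-tuning recipe, sketched with the transformers Trainer under illustrative assumptions (the checkpoint, the two-label task, and the toy data are mine, not the article’s):

# Sketch: fine-tune a general pretrained checkpoint on a tiny niche task.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "distilbert-base-uncased"  # illustrative general-purpose model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

texts, labels = ["the pump rattles loudly", "drains fine after cleaning"], [1, 0]
enc = tokenizer(texts, truncation=True, padding=True)

class TinyDataset(torch.utils.data.Dataset):
    def __len__(self):
        return len(labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in enc.items()}
        item["labels"] = torch.tensor(labels[i])
        return item

trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="niche-model",
                                         num_train_epochs=3),
                  train_dataset=TinyDataset())
trainer.train()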

Natural Language Processing

3 min read

Published in MLearning.ai · Jun 17, 2021

Demystifying the Attention Building Block in Transformer Networks

The Importance of Attention: Language tasks like answering questions and classifying documents are now designed around transformer networks. The attention building block is a central component of the transformer. Recently, transformer-based computer vision models have attained state-of-the-art results (further underscoring the importance of attention). …
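
The block at the heart of it is small enough to sketch in full: scaled dot-product attention, softmax(QK^T / sqrt(d_k))V, in PyTorch (my sketch, not the article’s code).

# Sketch: scaled dot-product attention, the core of the transformer block.
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # query-key similarity
    weights = F.softmax(scores, dim=-1)                # attention distribution
    return weights @ v                                 # weighted sum of values

q = k = v = torch.randn(1, 5, 64)  # (batch, tokens, embedding): self-attention
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([1, 5, 64])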

Machine Learning

4 min read

Published in Geek Culture · Jun 9, 2021

Approaches to Biomedical Text Mining with BERT

I highlight ways to lower the barrier to entry into biomedical text processing and to speed up progress in this vitally important area for mankind. Spotlighting the Challenge: The medical literature has grown to the extent that PubMed, the search engine and repository for biomedical research articles, adds 4,000 new papers every day and over…
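
One concrete low-barrier entry point (my example, with an assumed checkpoint name) is to load a biomedical-pretrained BERT from the Hugging Face hub and embed domain text:

# Sketch: embed a biomedical sentence with a domain-pretrained BERT.
# The BioBERT checkpoint name is an assumption; any biomedical BERT works.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-v1.1")
model = AutoModel.from_pretrained("dmis-lab/biobert-v1.1")

inputs = tokenizer("Aspirin inhibits platelet aggregation.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, num_tokens, 768) contextual embeddings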

BERT

5 min read

Feb 5, 2020

Writing a Custom Layer in PyTorch

Background: By now you may have come across the position paper, PyTorch: An Imperative Style, High-Performance Deep Learning Library, presented at the 2019 Neural Information Processing Systems conference. The paper promotes PyTorch as a deep learning framework that balances usability with pragmatic performance (sacrificing neither). Read the paper and judge for yourself. …
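
The pattern the post walks through is the standard one: subclass nn.Module, register parameters in __init__, define the computation in forward(). A bare-bones sketch (my own toy affine layer, not the post’s example):

# Sketch: a custom PyTorch layer as an nn.Module subclass.
import torch
import torch.nn as nn

class Affine(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.weight.t() + self.bias

layer = Affine(8, 4)
print(layer(torch.randn(2, 8)).shape)  # torch.Size([2, 4])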

3 min read

Following
  • AI2
  • Pinterest Engineering
  • Seungjun (Josh) Kim
  • PyTorch Geometric
  • Keru Chen
