Setting up a GitHub and a Bitbucket account on the same computer on macOS. Now with a guide for Windows 10.
Setting up GitHub and Bitbucket on the same computer (Windows)
Guide for Windows
mix3d asked for some help using this guide with Windows, so here we go. This was tested with Windows 10. Run all commands in Git Bash once it's installed.
GitHub will be the main account and Bitbucket the secondary.
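Before the step-by-step, here is a minimal sketch of the end state, assuming ed25519 keys; the filenames and email are illustrative choices, not prescribed by this guide:

```sh
# Generate a separate SSH key for each service (run in Git Bash).
ssh-keygen -t ed25519 -C "you@example.com" -f ~/.ssh/id_ed25519_github
ssh-keygen -t ed25519 -C "you@example.com" -f ~/.ssh/id_ed25519_bitbucket

# ~/.ssh/config -- tell SSH which key to offer to which host:
#   Host github.com
#     HostName github.com
#     User git
#     IdentityFile ~/.ssh/id_ed25519_github
#
#   Host bitbucket.org
#     HostName bitbucket.org
#     User git
#     IdentityFile ~/.ssh/id_ed25519_bitbucket
```

Because the `Host` entries use the real hostnames, existing `git@github.com:...` and `git@bitbucket.org:...` remotes keep working unchanged; only the key offered differs per service.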
When working with Git, the two prevailing workflows are Gitflow and feature branches. IMHO, being more of a subscriber to continuous integration, I feel that the feature branch workflow is better suited, and it is the focus of this article.
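For reference, a typical cycle in the feature branch workflow looks roughly like this (branch and remote names are illustrative):

```sh
git checkout -b feature/login-form    # branch off the main line for one unit of work
# ...edit files...
git add .
git commit -m "Add login form"
git push -u origin feature/login-form # publish the branch, then open a pull request
```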
If you are new to Git and Git workflows, I suggest reading the atlassian.com Git Workflow article in addition to this one, as there is more detail there than is presented here.
I admit that using Bash on the command line with the standard configuration leaves a bit to be desired when it comes to awareness of state. I suggest following these instructions for setting up Git Bash autocompletion; this tool will help you better visualize the state of a branch.
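A sketch of one common way to do this, assuming you want completion plus a branch-aware prompt (the script URLs point at Git's own contrib/ directory and are my assumption, not part of the original guide):

```sh
# Download Git's completion and prompt scripts into your home directory.
curl -o ~/.git-completion.bash \
  https://raw.githubusercontent.com/git/git/master/contrib/completion/git-completion.bash
curl -o ~/.git-prompt.sh \
  https://raw.githubusercontent.com/git/git/master/contrib/completion/git-prompt.sh

# Then add these lines to ~/.bashrc and restart Git Bash:
#   source ~/.git-completion.bash
#   source ~/.git-prompt.sh
#   export PS1='\w$(__git_ps1 " (%s)")\$ '   # working dir + current branch
```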
This is my technical interview cheat sheet. Feel free to fork it or do whatever you want with it. PLEASE let me know if there are any errors or if anything crucial is missing. I will add more links soon.
ANNOUNCEMENT
I have moved this over to the Tech Interview Cheat Sheet Repo, where it has been expanded and even has code challenges you can run and practice against!
Audience: I assume you have heard of ChatGPT, maybe played with it a little, and were impressed by it (or tried very hard not to be). And that you have also heard that it is "a large language model". And maybe that it "solved natural language understanding". Here is a short personal perspective of my thoughts about this (and similar) models, and where we stand with respect to language understanding.
Intro
Around 2014-2017, right within the rise of neural-network based methods for NLP, I was giving a semi-academic-semi-popsci lecture, revolving around the story that achieving perfect language modeling is equivalent to being as intelligent as a human. Somewhere around the same time I was also asked in an academic panel "what would you do if you were given infinite compute and no need to worry about labour costs" to which I cockily responded "I would train a really huge language model, just to show that it doesn't solve everything!". We
With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback".
I was puzzled for a while as to why RL (Reinforcement Learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language model terminology, "instruction fine-tuning": learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much
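To make the contrast concrete, here is the standard form of the two training objectives as I understand them from the InstructGPT line of work (notation mine, not from this post):

```latex
% Supervised fine-tuning (instruction tuning): imitate human demonstrations (x, y*)
\mathcal{L}_{\mathrm{SFT}}(\theta) = -\,\mathbb{E}_{(x,\,y^*)\sim D}\big[\log \pi_\theta(y^* \mid x)\big]

% RLHF: maximize a learned reward r_phi under a KL penalty (coefficient beta)
% that keeps pi_theta close to the supervised policy pi_SFT
\mathcal{J}_{\mathrm{RLHF}}(\theta) = \mathbb{E}_{x\sim D,\; y\sim \pi_\theta(\cdot\mid x)}
  \Big[\, r_\phi(x,y) \;-\; \beta\,\log \frac{\pi_\theta(y\mid x)}{\pi_{\mathrm{SFT}}(y\mid x)} \,\Big]
```

The relevant asymmetry, as I understand the argument the post goes on to develop: the SFT loss can only reward reproducing tokens a human actually wrote, while the RL objective scores the model's own sampled outputs.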
You probably don't know how to do Prompt Engineering; let me educate you.
You probably don't know how to do Prompt Engineering
(This post could also be titled "Features missing from most LLM front-ends that should exist")
Apologies for the snarky title, but there has been a huge amount of discussion around so-called "Prompt Engineering" these past few months on all kinds of platforms. Much of it is coming from individuals who are peddling an awful lot of "Prompting" and very little "Engineering".
Most of these discussions are little more than users finding that writing more creative and complicated prompts can help them solve a task that a simpler prompt could not. I claim this is not Prompt Engineering. This is not to say that crafting good prompts is not a difficult task, but it does not involve making any kind of sophisticated modifications to the general "template" of a prompt.
Others, who I think do deserve to call themselves "Prompt Engineers" (and an awful lot more than that), have been writing about and utilizing the rich new eco-system