@raihan71
Last active November 26, 2024 22:51
This is my speech text for ijs in November.

Hi everyone! It's great to be here today, I'm really excited! So, a few months ago I was diagnosed with a lymph gland condition, an issue with the lymph nodes in my right leg. The doctor said it was a bacterial infection, the same bacteria as in tuberculosis, aka Tuberculous (tuberkeles) Lymphadenitis (limpadenaitis). My mind kept saying, I'm afraid I can't make it, I'm afraid I can't make it to the conference. But after surgery, medication, and recovery, here I am. I'm not only standing here healed and feeling better, but also more passionate than ever to share my journey learning about prompt engineering.

Thank you to the organizers for the opportunity to come back to the conference, and of course thanks to all the amazing attendees here. Can we make some noise and give a round of applause 👏 *huuu

So yeah, let's dive into prompt engineering for web developers.

As you know, my name is Raihan Nismara, you can call me Raihan. I'm from Indonesia, and I'm currently assigned to a front-end web team under Telkom Group Indonesia. You can always connect with me, feel free to follow and ask me anything. I also run a podcast for fun, you can give it a listen if you like. So that's a little about me.

let's keep moving

So, here is the list of the main topics we're going to cover in the 40 minutes ahead, including the subtopics. Alright, the first point: what is prompt engineering? We're also going to delve into AI history and other aspects of AI itself. Over the past few years, we've seen software development rapidly evolve, with AI transforming the way we write and generate code. Traditionally, developers worked line by line, defining every detail explicitly. But today, thanks to advancements in artificial intelligence and natural language processing, we're entering an era where we can guide our applications with high-level instructions, or prompts, aka human language. This approach is called Prompt Engineering (see slide), and it's actually reshaping how we think about development.

First chapter: AI introduction

Before we dive any deeper, we might be wondering what's going on with software development right now. I mean, when did this all happen? The AI, the prompts, they just suddenly appeared out of nowhere. When did AI and prompts start impacting our work in such a big way?

Let's take a thousand steps back for that. The answer is... it all started in ancient Greece, where philosophers debated ideas and sought wisdom... let's explain it one by one. But, oh, actually, that's a bit too far back, we don't have time. So, let's fast forward a few thousand years and start with... when the first computer was invented.

History of Computers

  • Our story begins in the 1600s, when mathematicians and inventors sought to build machines to make calculations easier. Blaise Pascal, a French mathematician, created one of the first mechanical calculators, called the Pascaline, in 1642.

  • Fast-forward to the early 19th century, when English mathematician Charles Babbage envisioned a more complex machine. In 1837, he designed the Analytical Engine, a fully mechanical computer capable of performing any calculation. Babbage’s computer design laid the conceptual groundwork for modern computers.

  • The Birth of the Transistor: The 1950s

  • The IC (Integrated Circuit) Revolution: The 1960s-70s By the 1970s, the integrated circuit had enabled the creation of the microprocessor, a single chip that could function as a computer's CPU.

  • The 1980s marked the dawn of the personal computer (PC).

  • The Internet Age: 1990s

  • The Mobile and Cloud Computing Revolution: 2000s-2010s As we entered the 2000s, computers became even smaller and more portable.

Over time, with advances in processing power and the introduction of the internet, computers became capable of handling enormous amounts of data.

Fast forward to this day, and we’re in an era where AI, especially with large language models, can interpret and generate human-like responses, opening up possibilities we couldn’t have imagined before.

If we look at the pattern, it's connecting all the dots somehow; everything is related to everything else.

History of AI

Artificial Intelligence, by definition, is technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy. Just like computers, AI has evolved over the years.

  • The foundation of AI was laid in the 1940s-1950s, when the first artificial neurons were conceptualised and introduced by Warren McCulloch and Walter Pitts. In 1950 Alan Turing introduced the world to the Turing Test, a framework to discern intelligent machines, setting the wheels in motion for the first generation of giant computers.
  • Six years later, in 1956, a group of visionaries convened at the Dartmouth Conference hosted by John McCarthy, where the term “Artificial Intelligence” was first coined, setting the stage for decades of innovation.
  • Early development: in the late 60s and 70s the first NLP program was integrated into a computer application, or we could say it was the first chatbot implementation, ELIZA, built at MIT by Joseph Weizenbaum. It worked by recognizing keywords in a user's statement and then reflecting them back in the form of simple phrases or questions. For instance, if a user input "I feel sad," ELIZA might respond with "Why do you feel sad?" This approach created the illusion of understanding.
  • The 1980s were a period of both strife and regeneration for the AI community. The decade kicked off with reduced funding, a period known as the 'AI Winter.'
  • 1990s: Revival and Emergence of Machine Learning. In 1996, the LOOM project came into existence, exploring the realms of knowledge representation and laying down the pathways for the meteoric rise of generative AI in the ensuing years.
  • 2000s: The Genesis of Generative AI, as we rolled into the new millennium.
  • 2010s: Rise of AI and Breakthroughs

And the AI landscape has evolved, becoming more complex and, in many ways, more specialized. AI isn't just one big field these days; it's grown into a vast ecosystem with many specialized branches, or subsets, from computer vision to natural language processing (NLP), robotics, and, of course, large language models.

But we're only going to focus on two of them here: (see slide)

And behind the prompt there's a model that big companies trained, like ChatGPT. If you take a look at how it was built, the process involves many of those subsets, and there's a long pipeline behind how a model is trained.

NLP

I want to highlight deep learning here and how it relates to gen AI. So, gen AI is one of the algorithm families in machine learning, or you could say a subset, and it is built using deep learning techniques and trained on lots of data, terabytes of it. And within deep learning there are two kinds of models: 1. discriminative, 2. generative.

  • discriminative: here's how it works, it classifies existing data, for example deciding whether an image is a cat or a dog.
  • generative: the way it works, it generates new things, for example a new image of a cat. That's why, by definition, generative AI itself is about generating something new, or making decisions based on existing data, and it was designed to understand humans naturally.

And if you're wondering what part NLP plays in an LLM: if you take a look at the slide, NLP is a branch of AI that focuses on enabling computers to understand, interpret and produce human language. In essence, NLP is what enables generative AI models to understand prompts and produce meaningful responses. It's like the "brain" behind the language skills of an LLM, allowing it to parse and generate text in a way that feels natural and relevant to us.

Second chapter: LLMs

LLMs are trained with more than millions of parameters, which is why they can handle instructions in such complex language and with such complex inputs.

Natural Language Processing (NLP) tasks

How it works: the LLM takes the input and breaks sentences down to understand the structure and meaning of each component. Along the way it performs classic NLP tasks:

  • Semantic Parsing: breaking down sentences to understand the structure and meaning of each component.
  • Named Entity Recognition (NER): identifying and classifying key entities (e.g., people, places, dates) within text.
  • Sentiment Analysis: determining the sentiment or emotion expressed in text.

And there are two kinds of LLM: base LLMs and instruction-tuned LLMs.
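To make the sentiment-analysis task concrete, here's a toy, rule-based scorer. This is purely illustrative (the word lists are made up for the example); real NLP models learn these associations from training data:

```typescript
// A toy, rule-based sentiment scorer, just to illustrate the task.
const POSITIVE = new Set(["great", "love", "excited", "good"]);
const NEGATIVE = new Set(["sad", "afraid", "bad", "tired"]);

function sentiment(text: string): "positive" | "negative" | "neutral" {
  let score = 0;
  // Count positive and negative keywords in the lowercased words.
  for (const word of text.toLowerCase().match(/[a-z]+/g) ?? []) {
    if (POSITIVE.has(word)) score++;
    if (NEGATIVE.has(word)) score--;
  }
  return score > 0 ? "positive" : score < 0 ? "negative" : "neutral";
}

console.log(sentiment("I love this conference")); // → "positive"
```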

Fun fact: ELIZA, the first chatbot implementation, was based on just a set of pre-defined rules, essentially a pattern-matching system, that allowed the program to identify keywords within a user's input and generate responses by substituting those keywords into pre-written phrases. And it's kind of interesting that this approach generated a hallucination of understanding. It's also kind of interesting how everything is labeled as AI these days, haha, even when the program is just if-else, haha... but that's actually partly true, I mean AI does have if-else conditions in it, although obviously modern AI is not just that; it has a bunch of complex functions, neural networks and so on. Nevertheless, as humans, AI has a special place in our hearts. We're still amazed at how AI evolves into something more human-like when we interact with it, and we fall in love with that. That feeling is called the ELIZA effect: when people think a computer or robot really understands them and has feelings, even though it's just following a program.
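The whole trick can be sketched in a few lines. Here's a minimal ELIZA-style responder; the rules are my own illustrative ones, not Weizenbaum's original script:

```typescript
// A minimal ELIZA-style responder: pure pattern matching, no "understanding".
type Rule = { pattern: RegExp; respond: (m: RegExpMatchArray) => string };

const rules: Rule[] = [
  // Reflect the user's keywords back as a question.
  { pattern: /i feel (.+)/i, respond: (m) => `Why do you feel ${m[1]}?` },
  { pattern: /i am (.+)/i, respond: (m) => `How long have you been ${m[1]}?` },
];

function eliza(input: string): string {
  for (const rule of rules) {
    const match = input.match(rule.pattern);
    if (match) return rule.respond(match);
  }
  return "Please tell me more."; // fallback when no keyword matches
}

console.log(eliza("I feel sad")); // → "Why do you feel sad?"
```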

Just like any other program, LLMs can also be run locally on your machine. For example, these are the big three model families that are popular right now (see the slide). Other than that we also have (see the slide). So, to run those models, the first thing you need is a capable device, with a minimum of 8 GB of RAM (and honestly, 8 GB is barely enough, like mine), then we can start with the Ollama desktop app. You can browse and choose a model on Ollama at https://ollama.com/search
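As a sketch of what this looks like in code: a locally running Ollama server exposes a small HTTP API, and we can call its /api/generate endpoint. The model name here ("llama3.2") is just an example; use whichever model you pulled:

```typescript
// Minimal sketch of calling a locally running Ollama server.
// Assumes Ollama is installed and the "llama3.2" model has been pulled.
const OLLAMA_URL = "http://localhost:11434/api/generate";

// Build the JSON body for Ollama's /api/generate endpoint.
function buildGenerateRequest(model: string, prompt: string) {
  return { model, prompt, stream: false }; // stream:false → one JSON response
}

async function ask(model: string, prompt: string): Promise<string> {
  const res = await fetch(OLLAMA_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildGenerateRequest(model, prompt)),
  });
  const data = await res.json();
  return data.response; // the generated text
}

// Example usage (requires the Ollama server to be running):
// ask("llama3.2", "Explain closures in one sentence.").then(console.log);
```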

Third Chapter: Prompting

OK, so here's the part we've all been waiting for, let the party begin. We're going to demystify how to leverage prompting for our coding productivity.

One important thing to agree on here: "what do you think makes a good developer?" Take a moment to think about it. Is it technical skills? Or maybe a knack for innovation? Problem-solving ability? That's right, one of the things that makes a good developer is excellent problem solving. And how do developers usually solve problems when we hit errors?

Googling, that's right, searching... haha.

Prompt engineering is actually quite similar. Instead of Googling, we're 'prompting' a model, crafting specific inputs to guide the AI toward the response we're looking for. So, I encourage you to think of prompt engineering as an extension of a skill you're already familiar with: being good at searching. It's about refining how we ask questions and strategically guiding the AI, in much the same way you would refine a search. With the right prompts, we can unlock the potential of AI to streamline our work and amplify our capabilities as developers. So far so good? If you agree with that, say 'agreed'!

Prompting Technique

Just like anything else, we have frameworks in software development, think Scrum for agile workflows, or MVC for structuring applications. Prompt engineering also has its own set of frameworks designed to make prompting more effective and consistent; they're more like techniques. For me, prompting is like learning a language, especially English. If you look at today's LLMs, English is the only option for complete or complex responses compared with other languages, due to limited training data I guess, but hopefully in the near future we can prompt in any other language and get responses as good as in English.

Alright, the first techniques we could use are:

  • in-context learning: a technique used by LLMs where the model learns and generalizes from the examples provided in the input (the context).
  • no-context learning: the opposite of in-context learning; it's similar to zero-shot prompting, where the model is given only the task without any specific example or context. This is the technique most people use, I guess.
  • persona: where the model is given a specific persona or a set of characteristics, behaviours or roles. For example "role as", "as if".
  • zero-shot: so, the same thing as no-context learning.
  • few-shot: a technique where the model is given a few examples of the task or context. Quite similar to in-context learning, but this one is more common.
  • chain of thought: a technique in natural language processing that encourages the model to generate more detailed and reasoned responses by articulating its reasoning process step by step.
  • react: this technique focuses on how LLMs "reason" through the input they receive (the prompt), and then "act" by producing text or performing a specific task based on that reasoning.
  • iteration: a method used when prompting LLMs to iteratively refine the generated responses or actions based on feedback or further instructions.
  • comparisons: a method where the user asks the model to compare two or more entities or concepts based on specific criteria. It can be useful in situations that require decision-making or analysis of multiple options.
  • critique me: ask the model to give feedback on your input right away, or based on the context you provided.
  • laddering: breaking down the question into pieces one by one, making it easy to get what you want one step at a time.
  • 4th grader: this one is my favourite, a method where you ask the model to explain complex stuff in simple, understandable logic, so simple that even a 4th grader could understand it.
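To make a few of these techniques concrete, here's a minimal sketch of them as plain string templates. The helper names and wording are my own, not from any library:

```typescript
// Hypothetical helpers that assemble prompt strings for a chat model.
type Example = { input: string; output: string };

// Zero-shot / no-context: just the task, no examples.
function zeroShot(task: string): string {
  return task;
}

// Few-shot / in-context: prepend worked examples so the model can generalize.
function fewShot(task: string, examples: Example[]): string {
  const shots = examples
    .map((e) => `Input: ${e.input}\nOutput: ${e.output}`)
    .join("\n\n");
  return `${shots}\n\nInput: ${task}\nOutput:`;
}

// Persona: give the model a role before the task.
function persona(role: string, task: string): string {
  return `You are ${role}. ${task}`;
}

// Chain of thought: ask the model to reason step by step.
function chainOfThought(task: string): string {
  return `${task}\nLet's think step by step.`;
}

console.log(
  fewShot("convert 'userName' to kebab-case", [
    { input: "convert 'firstName' to kebab-case", output: "first-name" },
  ])
);
```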

Other than that, we can also ask for a different format, meaning when you want the model's output to be outstanding, not just plain text but cleaner and well-structured, we can request a specific format:

  • tabular format: asking the model to structure the requested information as a table.
  • summarize with bullets: a method where you instruct a model to condense a large piece of text or information into a shorter, more concise version. The goal is to capture the key points or main ideas without losing the essential meaning.
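As a sketch, these format requests are just instructions appended to the task. The wording below is my own; any clear phrasing works:

```typescript
// Append an output-format instruction to a prompt.
function asTable(prompt: string, columns: string[]): string {
  return `${prompt}\nRespond as a markdown table with columns: ${columns.join(", ")}.`;
}

function asBullets(prompt: string, maxBullets: number): string {
  return `${prompt}\nSummarize the answer in at most ${maxBullets} bullet points.`;
}

console.log(asTable("Compare React, Vue and Angular", ["framework", "pros", "cons"]));
```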

Let's say we still don't know where to start with these techniques, but we just want to explore prompts already written by somebody else, aka we just want some templates to modify right away. The answer is that we can leverage prompt templates:

  • prompt templates: here is a list of template collections you can explore yourself, across many categories: tech, content creation, marketing, and so on.

Coding Assistants

Let's say we're tired of chatbots and want something more in-action to support our coding, like a coding assistant. There are lots of coding assistants out there, let's spill them one by one. What do you think the first one will be?

  • Yes, GitHub Copilot, developed by GitHub and OpenAI, initial release 2021.
  • Tabnine. Fun fact: Tabnine was actually the first pioneer in gen AI for software development, or we could say Tabnine was the first coding assistant, launched in 2018.
  • replit
  • cursor.sh
  • amazon codewhisperer
  • snyk.io deepcode-ai
  • marscode
  • cody
  • sqlai

Almost Last Chapter: Future Development

So, we’ve explored how coding assistants like Copilot, ChatGPT, and others are transforming the way we write code, and we’ve touched on how mastering prompt techniques can amplify their effectiveness. But let’s take a moment to think about where all of this is heading. The future of web development is intrinsically linked with Artificial Intelligence. As AI becomes more sophisticated, we can anticipate more websites incorporating AI-driven functionality into their web design. Web designers will be equipped with AI tools to simplify the design process, accurately predict user behavior, and personalize the user experience. These advancements will enable designers to focus on creativity and innovation, further enhancing the digital landscape.

summary

in summary (see slide)

So, I guess in the near future developers will care less about comparing tech stacks or technical debates like which framework is good for you, React, Angular, so on and so forth. What they'll really care about is how fast you can generate code that is still accurate within 99%. And prompt engineering itself will keep evolving, or maybe we won't need the techniques at all in the future, because the models will be even smarter: we'll give a little input and the model will already know what we mean. So yeah, let's see...

So, I think that's all, end of talk. Thank you for joining my session, once again, see you around, peace, enjoy the conference!
