Merge pull request #29 from MarcusElwin/fix-ner-post-v2
fix: Small typos fixes
MarcusElwin authored Jan 25, 2024
2 parents ff1eb17 + d02d66e commit 8abb37f
Showing 2 changed files with 9 additions and 9 deletions.
10 changes: 5 additions & 5 deletions ds-with-mac/content/about/index.md
@@ -5,22 +5,22 @@ title: Hi, my name is Marcus.
seo_title: About
description: Learn more about my background and experience.
---
- Welcome to my blog, here I will share my thoughts on building *data products* using ML and Data Science. Everything I share here are my own opinions and do not reflect the opinions of the companies I have been working for.
+ Welcome to my blog, where I will share my thoughts on building *data products* using ML and Data Science. Everything I share here is my **own opinion** and does not reflect the opinions of the companies I have worked for.

## Who am I?

- I'm a tech and people interested recovering data scientist turned product manager. I am also a big fan of food :pizza: (*foodie*) and music (I play bass guitar :guitar: in a band).
+ I'm a tech- and people-interested recovering data scientist turned product manager (turned data scientist/ML engineer again). I am also a big fan of food :pizza: (*foodie*) and music (I play bass guitar :guitar: in a band).

## My Experience

- I'm a Senior Data Scientist turned Product Manager, living in Stockholm, :flag-se: that have been working with Data Science, Machine Learning and ML Systems for the past 5+ years in a mix of companies and industries ranging from *retail* to *fintech*. NLP and LLMs are some of my current focus areas as well as learning the ropes of *product management*.
+ I'm a Senior Data Scientist & ML Engineer, living in Stockholm :flag-se:, who has been working with Data Science, Machine Learning, Product Management and ML Systems for the past 5+ years in a mix of companies and industries ranging from *retail* to *fintech*. NLP and LLMs are some of my current focus areas, as well as learning the ropes of *product management*.

- I also have experience from other types of ML use cases such as:
+ I also have experience with other types of ML use cases such as:
* Demand forecasting
* Time series analysis
* Churn prediction
* Optimization
* Reinforcement Learning for Trading
* Customer segmentation

- I'm currently employed at [Tink](https://tink.com/), where I work with enriching open banking data (PSD2) for risk use cases, using Machine Learning and Data Science techniques. Python, SQL (big fan of *BigQuery*) are my go-to tools :tools: , but I do occasionally use other languages such as Java.
+ I'm currently employed at [PocketLaw](https://pocketlaw.com/), where I work with generative AI and Machine Learning in the legal domain for various use cases. Python and SQL (big fan of *BigQuery*) are my go-to tools :tools:, but I do occasionally use other languages such as Java and TypeScript (Node.js).
8 changes: 4 additions & 4 deletions ds-with-mac/content/posts/prompt-eng-ner/index.md
@@ -8,7 +8,7 @@ author: Marcus Elwin

draft: false
date: 2024-01-21T16:23:42+01:00
- lastmod:
+ lastmod: 2024-01-25T16:23:42+01:00
expiryDate:
publishDate:

@@ -31,7 +31,7 @@ newsletter: false
disable_comments: false
---

- **2023** was the year of *exploration*, *testing* and *proof-of-concepts* or deployment of smaller LLM-powered workflows/use cases for many organizations. Whilst 2024 will likely be the year where we will see even more production systems leveraging LLMs. Compared to a traditional ML system where data (examples, labels), model and weights are artifacts, prompts are the **main** artifacts. Prompts and prompt engineering are used for driving a certain behavior of an assistant or agent.
+ **2023** was the year of *exploration*, *testing* and *proof-of-concepts* or deployment of smaller LLM-powered workflows/use cases for many organizations, whilst 2024 will likely be the year where we see even more production systems leveraging LLMs. Compared to a traditional ML system, where data (examples, labels), model and weights are some of the main artifacts, prompts are instead the **main** artifacts. Prompts and prompt engineering are fundamental in driving a certain behavior of an assistant or agent for your use case.
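The idea of prompts as first-class artifacts can be sketched as below. This is a minimal, hypothetical illustration (the version tag, template text and helper name are not from the post): the prompt lives in code as a versioned template, much like model weights or training data in a traditional ML system.

```python
from string import Template

# Hypothetical example: a prompt treated as a versioned artifact,
# tracked and changed deliberately like model weights or data.
PROMPT_VERSION = "v2"

NER_PROMPT = Template(
    "You are an assistant that extracts named entities.\n"
    "Return the entities found in the text as a JSON list.\n"
    "Text: $text"
)

def render_prompt(text: str) -> str:
    """Render the prompt artifact for a given input text."""
    return NER_PROMPT.substitute(text=text)

prompt = render_prompt("Marcus lives in Stockholm.")
print(prompt)
```

Because the template is an explicit artifact, a change to its wording is a diff that can be reviewed and versioned, just like any other code change.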

Therefore many of the large players as well as academia have provided guides on how to prompt LLMs efficiently:
1. :computer: [OpenAI Prompt Engineering](https://platform.openai.com/docs/guides/prompt-engineering)
@@ -50,7 +50,7 @@ Prompt Engineering sometimes feels more like an *art* compared to *science* but
One definition of `Prompt Engineering` is shown below:

{{< notice info >}}
- Prompt engineering is a relatively new discipline for developing and optimizing prompts to efficiently use language models (LMs) for a wide variety of applications and research topics. Prompt engineering skills help to better understand the capabilities and limitations of large language models (LLMs).
+ **Prompt engineering** is a relatively new discipline for developing and optimizing prompts to efficiently use language models (LMs) for a wide variety of applications and research topics. Prompt engineering skills help to better understand the capabilities and limitations of large language models (LLMs).

Prompt engineering is not just about designing and developing prompts. It encompasses a wide range of skills and techniques that are useful for interacting and developing with LLMs. It's an important skill to interface, build with, and understand the capabilities of LLMs. You can use prompt engineering to improve the safety of LLMs and build new capabilities like augmenting LLMs with domain knowledge and external tools.
— <cite>Prompt Engineering Guide[^1]</cite>
@@ -350,7 +350,7 @@ Example output with this update prompt is shown below:

## Technique #6 - Use Chain-of-Thought
{{< notice info >}}
- Chain-of-Thought (CoT) is a prompting technique where each input question is followed by an intermediate reasoning step, that leads to the final answer. This shown to improve the the output from LLMs. There is also a slight variation of CoT called _Zero-Shot Chain-of-Thought_ where you introduce **“Let’s think step by step”** to guide the LLM's reasoning.
+ **Chain-of-Thought** (CoT) is a prompting technique where each input question is followed by an intermediate reasoning step that leads to the final answer. This has been shown to improve the output from LLMs. There is also a slight variation of CoT called _Zero-Shot Chain-of-Thought_ where you introduce **“Let’s think step by step”** to guide the LLM's reasoning.
{{< /notice >}}
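The Zero-Shot CoT variant described above can be sketched as follows. This is a hypothetical example, not the post's actual prompt: the function name, entity labels and message layout are illustrative, and the resulting `messages` list is what you would send to a chat-completion style API.

```python
# Hypothetical sketch of Zero-Shot Chain-of-Thought prompting for NER:
# the trigger phrase "Let's think step by step" is appended to the user
# message to guide the model's intermediate reasoning.

def build_zero_shot_cot_prompt(text: str) -> list[dict]:
    """Build chat messages for NER with a Zero-Shot CoT trigger."""
    system = (
        "You are an expert at Named Entity Recognition. "
        "Extract PERSON, LOCATION and ORGANIZATION entities "
        "from the user's text and return them as JSON."
    )
    user = (
        f"Text: {text}\n"
        "Let's think step by step."  # Zero-Shot CoT trigger phrase
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_zero_shot_cot_prompt("Marcus works at PocketLaw in Stockholm.")
print(messages[1]["content"])
```

The only change versus a plain zero-shot prompt is that single trailing sentence, which is what makes the technique cheap to try before reaching for full few-shot CoT examples.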

An update to the prompt now using *Zero-Shot Chain-of-Thought* would be:
