The B2B SaaS Kit is an open-source starter toolkit for developers looking to quickly stand up a SaaS product where the customer can be a team of users (i.e., a business).
The kit uses TypeScript, Astro, React, Tailwind CSS, and a number of third-party services that take care of essential, yet peripheral requirements, such as secrets management, user authentication, a database, product analytics, customer support, payments, and deployment infrastructure.
The kit is designed with two primary goals in mind:
- Start with a fully-functional, relatively complex application. Then, modify it to become your own product.
- You should be able to build an app to validate your idea for the cost of a domain name - all the third-party services used by the kit offer meaningful free-forever starter plans.
"B2B" means "business-to-business". In the simplest terms, a B2B product is a product where post-signup, a user can create an organization, invite others, and do something as a team.
B2B companies are fairly common - for example, over 40% of Y Combinator-funded startups self-identify as B2B - but B2B-specific starter kits appear to be quite rare, hence this effort.
- First, check out https://PromptsWithFriends.com - it's an example app built with this kit. Prompts with Friends is a way to collaborate on GPT prompts with others.
- Next, get your own copy of Prompts with Friends running locally on your machine
- Then, learn how to deploy your version to production
- Lastly, build your own product by modifying the app
- Install prerequisites
  - Node.js 18 or Node.js 20

    ⚠️ Warning: Will not work with Node.js 19 due to a bug in the `set-cookie` implementation that was fixed in Node.js 20

- Clone repo, start app
  ```shell
  git clone git@github.com:fogbender/b2b-saaskit.git
  cd b2b-saaskit
  corepack enable
  corepack prepare [email protected] --activate
  yarn
  yarn dev
  ```
- Open http://localhost:3000 in a browser tab - you should see a page titled "Welcome to Prompts with Friends"
- You'll find detailed configuration instructions on http://localhost:3000/setup. Once you've worked through the setup steps, you should have a working copy of Prompts with Friends running on http://localhost:3000/app
To deploy your version to production, please refer to section "9. Production deployment to Vercel" of the setup guide.
We're using Drizzle ORM to manage database migrations in B2B SaaS Kit. For details on Drizzle, see https://orm.drizzle.team/kit-docs/overview.
Another popular ORM option is Prisma - if you'd like to use Prisma instead of Drizzle and need help, please get in touch with us.
First, you'd have to make a few changes in `src/db/schema.ts` (the changes you make will only affect your TypeScript code, not the actual table data):
```ts
import { integer, pgTable, text, timestamp } from "drizzle-orm/pg-core";

export const example = pgTable("example", {
  exampleId: text("id").primaryKey(),
  value: integer("value").notNull(),
  createdAt: timestamp("created_at").defaultNow().notNull(),
});
```
If you try to access the `example` table, you'll get a runtime error. To fix this, you have to run a "migration", which applies a set of updates - which may include some combination of schema and data changes - to the database.
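For illustration, "accessing the table" here means something like the following Drizzle query - a minimal sketch, assuming a `db` client export whose name and import path may differ from the kit's actual code:

```ts
// Sketch only: the `db` client import is an assumption, not the kit's actual path
import { db } from "./db";
import { example } from "./schema";

// Fails at runtime until the migration has been applied,
// because the underlying Postgres table doesn't exist yet
const rows = await db.select().from(example);
console.log(rows);
```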
Drizzle migrations happen in two steps: the first step generates a migration file, the second step applies the migration to the database.
To generate a migration file, run:

```shell
doppler run yarn drizzle-kit generate:pg
```
This will generate a file called something like `src/db/migration/1234_xyz.sql`. Under normal circumstances, you wouldn't have to worry about this file - it will contain an auto-generated set of SQL statements needed to apply the changes expressed in your `schema.ts` to the database. However, since we're using Supabase Postgres, we have to take care of Row Level Security policies when creating new tables.
To do this, open the migration file and add the following to the end, making sure to change `example` to the table name you're using:
```sql
ALTER TABLE example ENABLE ROW LEVEL SECURITY;
CREATE POLICY "service" ON "public"."example" AS PERMISSIVE FOR ALL TO service_role USING (true);
```
Finally, run the migration:

```shell
doppler run yarn migrate
```
If you open your Postgres console (e.g., Supabase or psql), you'll see the new table.
Migrating your development database will not migrate the production one. To migrate production, run `migrate` with the production configuration:

```shell
doppler run yarn migrate --config prd
```
We settled on tRPC to take care of the "API" part of the app. tRPC's excellent integration with TanStack Query (formerly React Query) and Zod sealed the deal for us, because we considered ease of refactoring and typesafety very important for a codebase that's meant to be heavily modified. There are other reasons to like tRPC: it ships with a set of great features, like middlewares, serialization, input validation, and error handling.
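To give a quick taste of the input validation piece, here's a hypothetical router (not part of the kit) that would live next to the other routers and validate its input with Zod - a malformed payload is rejected with a typed `BAD_REQUEST` error before the resolver runs, and `input` is fully typed inside it:

```ts
import { z } from "zod";

import { createTRPCRouter, publicProcedure } from "../trpc";

// Hypothetical example router - not part of the kit
export const greetingRouter = createTRPCRouter({
  greet: publicProcedure
    // The Zod schema doubles as runtime validation and compile-time types for `input`
    .input(z.object({ name: z.string().min(1).max(100) }))
    .query(({ input }) => {
      return { message: `Hello, ${input.name}!` };
    }),
});
```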
Backend routing for tRPC starts with `export const appRouter` in the `src/lib/trpc/root.ts` file.
To add a new endpoint - say, a simple counter - add a `counterRouter` to `appRouter`:
```diff
+import { counterRouter } from './routers/counter';

 export const appRouter = createTRPCRouter({
   hello: helloRouter,
   auth: authRouter,
   prompts: promptsRouter,
   settings: settingsRouter,
   surveys: surveysRouter,
+  counter: counterRouter,
 });
```
Next, create a router file called `src/lib/trpc/routers/counter.ts`:
```ts
import { createTRPCRouter, publicProcedure } from "../trpc";

// In-memory counter - lives on the server, resets on restart
let i = 0;

export const counterRouter = createTRPCRouter({
  getCount: publicProcedure.query(async () => {
    // Artificial delay to simulate network latency
    await new Promise((resolve) => setTimeout(resolve, 300));
    return i;
  }),
  increment: publicProcedure.mutation(() => {
    return ++i;
  }),
});
```
What's happening here?
- `createTRPCRouter` creates a new router mounted in `appRouter`. Routers can be nested ad infinitum.
- `publicProcedure` is a way to add a remote call that doesn't perform any checks in the middle (i.e., without any middlewares). For examples of procedure builders that do perform additional checks, see `authProcedure` or `orgProcedure` in `src/lib/trpc/trpc.ts`.
- Such checks are often implemented with "middlewares". tRPC middlewares add data to the `ctx` object, which is passed as input to all tRPC functions. Middlewares are commonly used to perform access control checks or input validation - see the sketch after this list.
- `query` and `mutation` correspond to `useQuery` and `useMutation` in TanStack Query, respectively. You can think of these as `GET` and `POST` requests.
- We're using a simple variable to store the counter value to illustrate that it lives outside the frontend code. In a real app, similar functionality would be handled by a database, whose role is to persist data across frontend nodes and server restarts.
- We are using `await new Promise((resolve) => setTimeout(resolve, 300));` to simulate network latency - useful for testing loading states during development.
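Here's a minimal, self-contained sketch of such a middleware - the `Context` shape and session handling are made up for illustration; the kit's real procedure builders live in `src/lib/trpc/trpc.ts`:

```ts
import { initTRPC, TRPCError } from "@trpc/server";

// Hypothetical context - not the kit's actual Context type
type Context = { sessionToken?: string };

const t = initTRPC.context<Context>().create();

// Middleware: reject unauthenticated calls and add `userId` to ctx
const withUser = t.middleware(({ ctx, next }) => {
  if (!ctx.sessionToken) {
    throw new TRPCError({ code: "UNAUTHORIZED" });
  }
  // Whatever is returned under `ctx` is merged into the ctx seen by the procedure
  return next({ ctx: { userId: `user-for-${ctx.sessionToken}` } });
});

const protectedProcedure = t.procedure.use(withUser);

export const whoAmIRouter = t.router({
  whoAmI: protectedProcedure.query(({ ctx }) => {
    // ctx.userId was added by the middleware above
    return { userId: ctx.userId };
  }),
});
```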
Now that we have the backend code in place, let's call it from a new component, `src/components/Counter.tsx`:
```tsx
import { trpc, TRPCProvider } from "./trpc";

export function Counter() {
  return (
    <TRPCProvider>
      <CounterInternal />
    </TRPCProvider>
  );
}

const CounterInternal = () => {
  const counterQuery = trpc.counter.getCount.useQuery();
  const trpcUtils = trpc.useContext();
  const incrementMutation = trpc.counter.increment.useMutation({
    async onSettled() {
      await trpcUtils.counter.getCount.invalidate();
    },
  });

  return (
    <div className="container mx-auto mt-8">
      Count: {counterQuery.data ?? "loading..."}
      <br />
      <button
        disabled={incrementMutation.isLoading}
        className="rounded bg-blue-500 px-4 py-2 text-white disabled:bg-gray-400"
        onClick={() => {
          incrementMutation.mutate();
        }}
      >
        Increase count
      </button>
    </div>
  );
};
```
What's happening here?
- We're wrapping our components in `TRPCProvider`, otherwise `useQuery` and `useMutation` won't work. Usually, this is done higher up in the component tree.
- `const counterQuery = trpc.counter.getCount.useQuery();` is how we call the backend endpoint. You can think of `counter.getCount` as a path to the endpoint that corresponds to `counterRouter` from the previous step. If you've used TanStack Query, you can think of `trpc.[path].useQuery()` as the equivalent of `useQuery({ queryKey: [path], queryFn: () => fetch('http://localhost:3000/api/trpc/[path]') })`.
- `trpc.useContext` is a tRPC wrapper around `queryClient`. Its main purpose is to update the cache (used for optimistic updates - see the sketch after this list) and to re-fetch queries. In our example, we'd like to see the new value for the counter immediately after clicking the "Increase count" button.
- `useMutation` is similar to `useQuery`, but it must be called manually with `incrementMutation.mutate()` and it'll never auto re-fetch like `useQuery`. (By default, `useQuery` re-fetches on window focus.) Conceptually, calling `useMutation` is similar to making a "POST" request - in our example, calling it causes the counter value to increase.
- `onSettled` is called after the mutation completes, either successfully or with an error. Because we know that the counter value has changed on the server, we know the value in `counterQuery.data` is out of date. To get the current value, we invalidate the `getCount` cache, which immediately triggers a `useQuery` re-fetch. `invalidate()` returns a Promise that resolves once the new value of the counter is fetched. While we're waiting on this Promise in `incrementMutation.onSettled`, the value of `incrementMutation.isLoading` stays `true` until the new value of `counterQuery` is available.
- We use `incrementMutation.isLoading` to place the button in a `disabled` state - this prevents the user from clicking the button multiple times, ensuring the same user can only increment the counter by one with each click.
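For completeness, here's what an optimistic-update variant of the increment mutation might look like inside `CounterInternal` - a sketch only, not part of the kit:

```tsx
// Inside CounterInternal, replacing the existing useMutation call (sketch only)
const incrementMutation = trpc.counter.increment.useMutation({
  async onMutate() {
    // Stop in-flight getCount fetches so they don't overwrite the optimistic value
    await trpcUtils.counter.getCount.cancel();
    const previous = trpcUtils.counter.getCount.getData();
    // Optimistically bump the cached value (the query takes no input, hence `undefined`)
    trpcUtils.counter.getCount.setData(undefined, (old) => (old ?? 0) + 1);
    return { previous };
  },
  onError(_error, _variables, context) {
    // Roll back to the snapshot taken in onMutate if the server call failed
    if (context?.previous !== undefined) {
      trpcUtils.counter.getCount.setData(undefined, context.previous);
    }
  },
  async onSettled() {
    // Either way, re-fetch to converge on the server's value
    await trpcUtils.counter.getCount.invalidate();
  },
});
```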
Now that we have a `Counter` component, let's add a way to show it by creating a `src/pages/counter.astro` file with the following content:
```astro
---
import { Counter } from "../components/Counter";
import Layout from "../layouts/Layout.astro";
---

<Layout title="Counter">
  <Counter client:only="react" />
</Layout>
```
To see it in action, open http://localhost:3000/counter.
So far, what we've done is called client-side rendering (CSR): the client starts off with no known state (the counter is `undefined`) and builds it up by querying the server. If you're familiar with CRA (Create React App) or Vite, this is exactly how apps built with those tools work.
This approach may work well for an app where the user is an authenticated human operating a web browser, but if the user is a search engine retrieving a blog post for indexing or a messaging system trying to unfurl a URL to display useful metadata, not so much. To get actual content, the search engine or messaging system would have to run the JavaScript to build up the state of the page in question by querying the server - something they may simply be unwilling to do due to cost or performance constraints.
Mechanisms that return pre-rendered content to the caller can either assemble it "on the fly", while processing a request (also called "server-side rendering", or SSR), or simply read it from disk (also called "build-time generation" or "static site generation", SSG). For example, say you've got a site with tens of thousands of products, and each product page needs to display the name of the product in question - instead of generating tens of thousands of static pages, which might be prohibitively expensive, you can generate the title for each product page "at request time". Alternatively, say you've got a `robots.txt` file whose content differs between staging and production environments - this file would be generated and written to disk "at build time", generally during deployment.
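As a concrete sketch of the `robots.txt` case, an Astro file endpoint at `src/pages/robots.txt.ts` could look like the following. The `STAGING` environment variable is hypothetical - adapt it to whatever your environments use - and depending on your Astro version the export is named `GET` (Astro 3+) or `get` (Astro 2):

```ts
// src/pages/robots.txt.ts - generated once, at build time (SSG); a sketch only
import type { APIRoute } from "astro";

export const prerender = true;

export const GET: APIRoute = () => {
  // Hypothetical STAGING flag: block crawlers on staging, allow them in production
  const body = import.meta.env.STAGING
    ? "User-agent: *\nDisallow: /\n"
    : "User-agent: *\nAllow: /\n";
  return new Response(body, {
    headers: { "Content-Type": "text/plain" },
  });
};
```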
SSR is often mentioned in discussions that involve generating `og:image` tags and other SEO-focused operations.
Our setup lets us do both SSR and SSG - let's take a look at how.
First, we'll change our `src/pages/counter.astro` to the following:
```astro
---
import { Counter } from "../components/Counter";
import Layout from "../layouts/Layout.astro";
import { createHelpers } from "../lib/trpc/root";

export const prerender = false;

const helpers = createHelpers(Astro);
const count = await helpers.counter.getCount.fetch();
---

<Layout title={"Counter initial value is " + count}>
  <Counter client:load dehydratedState={helpers.dehydrate()} />
</Layout>
```
Note that in Astro, the code between the `---` markers (called "frontmatter") runs on the server and is not sent to the client.
You can think of `export const prerender = ...` as a per-route switch between SSG and SSR: `prerender = true` means "build-time" (SSG), and `prerender = false` means "request-time" (SSR). If `export const prerender` isn't defined, its default value depends on the value of `output` in the `astro.config.mjs` settings file. In our case, `output` is `hybrid`, which means `prerender = true` by default. For more info on this, see https://docs.astro.build/en/guides/server-side-rendering/.
If you're familiar with other full-stack frameworks, using `prerender = false` is similar to `getServerSideProps` in Next.js and `loader` in Remix, while `prerender = true` is similar to `getStaticProps` in Next.js.
`createHelpers` is a utility that allows you to call tRPC procedures server-side.
`await helpers.counter.getCount.fetch();` is the first important part of our tRPC SSR integration. If you call `fetch()` or `prefetch()` on any tRPC procedure before rendering the app, you are pre-populating the TanStack Query cache for those queries.
This means two things.
One, during SSR, `useQuery` is usually set to `loading` state and `data` is `undefined`. Now, the queries that we have successfully prefetched will return actual data. In our case, `trpc.counter.getCount.useQuery().data` in the React code will be set to `0` instead of `undefined`.
Two, during client rendering, the query will have an initial value that can be displayed to a user right away. Our recommended rule of thumb is to prefetch on the server and do background updates on the client for SSG pages, and to prefetch on the server and do no background re-fetches on the client for SSR pages. To control this behavior, we use the `staleTime` `useQuery` option (more on this later).
`dehydratedState={helpers.dehydrate()}` is the second important part of the integration: it allows components to use the query cache during SSR, as well as to serialize the cache into HTML, so that we can use the same data to perform "hydration" of the client components.
To take advantage of these features, we must make some changes in our React code in the `src/components/Counter.tsx` file:
```diff
+import type { DehydratedState } from '@tanstack/react-query';
+
 import { trpc, TRPCProvider } from './trpc';

-export function Counter() {
+export function Counter({ dehydratedState }: { dehydratedState?: DehydratedState }) {
   return (
-    <TRPCProvider>
+    <TRPCProvider dehydratedState={dehydratedState}>
       <CounterInternal />
     </TRPCProvider>
   );
 }

 const CounterInternal = () => {
-  const counterQuery = trpc.counter.getCount.useQuery();
+  const counterQuery = trpc.counter.getCount.useQuery(undefined, {
+    staleTime: 1000, // one second
+  });
   const trpcUtils = trpc.useContext();
```
We need to pass the `dehydratedState` we got from the Astro component into `TRPCProvider`, so that when we rerun the same query, we can use values from the cache.
Note that we are also passing `staleTime` to `useQuery` - this prevents TanStack Query from re-fetching data that we already got from the server (because of `prerender = false`, each page load will get fresh data from the server).
In the case of SSG (`prerender = true`), the time between query cache creation on the server and the `useQuery` call on the client could be much larger, making it more likely that the data is outdated - in this case, it's a good idea to get fresh data from the server on page load.
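For example, on an SSG page you might keep the dehydrated value for the first paint but always re-fetch in the background on mount - a sketch of one possible configuration, not the kit's defaults:

```tsx
const counterQuery = trpc.counter.getCount.useQuery(undefined, {
  staleTime: 0, // treat data dehydrated at build time as stale immediately
  refetchOnMount: "always", // re-fetch in the background whenever the component mounts
});
```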
B2B SaaS Kit is licensed under the MIT License. See the LICENSE file for more details.