
Response Caching

The examples below use Vercel's edge caching to serve data to your users as fast as possible.

info

Always be careful with caching - especially if you handle personal information.

Since batching is enabled by default, it's recommended to set your cache headers in the responseMeta function and make sure that there are not any concurrent calls that may include personal data - or to omit cache headers completely if there is an auth header or cookie.
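
For example, a server-side responseMeta can check for credentials before emitting any cache headers. This is a minimal sketch, assuming a Next.js API handler wired up like the server example further down this page; the import path and the exact header checks are placeholders for your own setup:

tsx
import * as trpcNext from '@trpc/server/adapters/next';
// assumed to match the `appRouter` and `createContext` from the server example below
import { appRouter, createContext } from '../../../server';

export default trpcNext.createNextApiHandler({
  router: appRouter,
  createContext,
  responseMeta(opts) {
    const { ctx, errors, type } = opts;
    const req = ctx?.req;
    // treat any authorization header or cookie as a sign of personal data
    const hasCredentials = Boolean(req?.headers.authorization ?? req?.headers.cookie);
    if (!hasCredentials && errors.length === 0 && type === 'query') {
      // cache request for 1 day + revalidate once every second
      const ONE_DAY_IN_SECONDS = 60 * 60 * 24;
      return {
        headers: {
          'cache-control': `s-maxage=1, stale-while-revalidate=${ONE_DAY_IN_SECONDS}`,
        },
      };
    }
    // credentials present (or not a successful query): omit cache headers entirely
    return {};
  },
});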

You can also use a splitLink to separate your public requests from those that should be private and uncached.
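
Here is a minimal sketch of such a split, assuming (hypothetically) that every procedure touching personal data lives under a `private` namespace in your router - adjust the condition to whatever convention your router actually uses:

tsx
import { httpBatchLink, splitLink } from '@trpc/client';
import { createTRPCNext } from '@trpc/next';
import type { AppRouter } from '../server/routers/_app';

export const trpc = createTRPCNext<AppRouter>({
  config() {
    return {
      links: [
        splitLink({
          // hypothetical convention: personal-data procedures live under `private.*`
          condition: (op) => op.path.startsWith('private.'),
          // private calls get their own link, so they are batched separately and
          // never share a (cacheable) HTTP response with public calls
          true: httpBatchLink({ url: '/api/trpc' }),
          // public calls can safely be served with edge cache headers
          false: httpBatchLink({ url: '/api/trpc' }),
        }),
      ],
    };
  },
});

Because each httpBatchLink batches its own operations, public and private procedure calls never end up in the same HTTP request, so a cache header on the public batch can never leak private data.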

App Caching

If you turn on SSR in your app, you might discover that your app loads slowly on, for instance, Vercel, but you can actually statically render your whole app without using SSG; read this Twitter thread for more insights.

Example code

utils/trpc.tsx
tsx
import { httpBatchLink } from '@trpc/client';
import { createTRPCNext } from '@trpc/next';
import type { AppRouter } from '../server/routers/_app';

export const trpc = createTRPCNext<AppRouter>({
  config(opts) {
    if (typeof window !== 'undefined') {
      // during client requests
      return {
        links: [
          httpBatchLink({
            url: '/api/trpc',
          }),
        ],
      };
    }
    // during SSR, call the deployed (or local) URL directly
    const url = process.env.VERCEL_URL
      ? `https://${process.env.VERCEL_URL}/api/trpc`
      : 'http://localhost:3000/api/trpc';
    return {
      links: [
        httpBatchLink({
          url,
        }),
      ],
    };
  },
  ssr: true,
  responseMeta(opts) {
    const { clientErrors } = opts;
    if (clientErrors.length) {
      // propagate the first http error from the API calls
      return {
        status: clientErrors[0].data?.httpStatus ?? 500,
      };
    }
    // cache the full page for 1 day + revalidate once every second
    const ONE_DAY_IN_SECONDS = 60 * 60 * 24;
    return {
      headers: {
        'cache-control': `s-maxage=1, stale-while-revalidate=${ONE_DAY_IN_SECONDS}`,
      },
    };
  },
});

API Response caching

Since all queries are normal HTTP GETs, we can use normal HTTP headers to cache responses, make the responses snappy, give your database a rest, and easily scale your API to gazillions of users.
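
For instance (a sketch, assuming the `public.slowQueryCached` procedure from the example below is deployed behind `/api/trpc` and no data transformer is configured), the headers returned from `responseMeta` show up as ordinary response headers on the GET request:

tsx
// A tRPC query is a plain HTTP GET, so any CDN or proxy in front of your API
// can cache it based on the `cache-control` header set in `responseMeta`.
const res = await fetch('/api/trpc/public.slowQueryCached');
console.log(res.headers.get('cache-control'));
// -> "s-maxage=1, stale-while-revalidate=86400" with the responseMeta shown below
const body = await res.json();
console.log(body.result.data.lastUpdated);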

Using responseMeta to cache responses

This assumes you're deploying your API somewhere that can handle stale-while-revalidate cache headers, such as Vercel.

server.ts
tsx
import { initTRPC } from '@trpc/server';
import * as trpcNext from '@trpc/server/adapters/next';
// assumed: a Prisma client instance exported from your own module
import { prisma } from './prisma';

export const createContext = async ({
  req,
  res,
}: trpcNext.CreateNextContextOptions) => {
  return {
    req,
    res,
    prisma,
  };
};
type Context = Awaited<ReturnType<typeof createContext>>;

export const t = initTRPC.context<Context>().create();

const waitFor = async (ms: number) =>
  new Promise((resolve) => setTimeout(resolve, ms));

export const appRouter = t.router({
  public: t.router({
    slowQueryCached: t.procedure.query(async (opts) => {
      await waitFor(5000); // wait for 5s to simulate a slow query
      return {
        lastUpdated: new Date().toJSON(),
      };
    }),
  }),
});

// Exporting type AppRouter only exposes types that can be used for inference
// https://ts.nodejs.cn/docs/handbook/release-notes/typescript-3-8.html#type-only-imports-and-export
export type AppRouter = typeof appRouter;

// export API handler
export default trpcNext.createNextApiHandler({
  router: appRouter,
  createContext,
  responseMeta(opts) {
    const { ctx, paths, errors, type } = opts;
    // assuming all your public procedures have the keyword `public` in their path
    const allPublic = paths && paths.every((path) => path.includes('public'));
    // checking that no procedures errored
    const allOk = errors.length === 0;
    // checking we're doing a query request
    const isQuery = type === 'query';
    if (ctx?.res && allPublic && allOk && isQuery) {
      // cache request for 1 day + revalidate once every second
      const ONE_DAY_IN_SECONDS = 60 * 60 * 24;
      return {
        headers: {
          'cache-control': `s-maxage=1, stale-while-revalidate=${ONE_DAY_IN_SECONDS}`,
        },
      };
    }
    return {};
  },
});