If you’ve been experimenting with Google AI Studio and have generated TypeScript files, you might be wondering how to actually run and deploy them in your own environment.
This post walks you through everything — from setup to deployment — so you can go from AI Studio sample to live, running project in under 15 minutes.
🧩 Step 1: Prerequisites
Before you start, make sure you have:
- Node.js 18+ (or Bun 1.1+)
- A package manager like npm or pnpm
- Your Gemini API key from AI Studio
- The TypeScript files generated by AI Studio
📦 Step 2: Initialize Your Project
Create a new project folder and install dependencies:
mkdir ai-studio-ts && cd ai-studio-ts
npm init -y
npm i @google/generative-ai dotenv
npm i -D typescript tsx @types/node
Note that dotenv is loaded at runtime, so it belongs in regular dependencies, and tsx replaces ts-node for running TypeScript directly.
npx tsc --init
Update your tsconfig.json for modern Node setups:
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "outDir": "dist",
    "rootDir": "src",
    "resolveJsonModule": true,
    "isolatedModules": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  },
  "include": ["src"]
}
Two things to watch: add "type": "module" to your package.json so Node runs the compiled output as ES modules, and with NodeNext resolution, relative imports in your TypeScript must include the .js extension (e.g. import './geminiClient.js').
📁 Step 3: Add the AI Studio TypeScript Files
Inside your project folder, create a new src/ directory and drop in the .ts files you exported from AI Studio.
Example structure:
ai-studio-ts/
src/
geminiClient.ts
index.ts
🔑 Step 4: Add Your Gemini API Key
Create a .env file in the root folder:
GEMINI_API_KEY=YOUR_KEY_HERE
Load it inside your entry file (src/index.ts):
import 'dotenv/config';
const apiKey = process.env.GEMINI_API_KEY!;
Never hardcode the API key — keep it safe and private!
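To fail fast with a clear message instead of sending undefined to the API, you can add a small guard. This is a sketch of my own (the requireEnv name is not from AI Studio's output):

```typescript
// Hypothetical helper: read a required environment variable or throw
// a descriptive error instead of passing "undefined" downstream.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value || value.trim() === '') {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: const apiKey = requireEnv('GEMINI_API_KEY');
```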
🤖 Step 5: Run a Sample AI Studio File
AI Studio usually generates something like this:
import 'dotenv/config';
import { GoogleGenerativeAI } from '@google/generative-ai';

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);

export async function runSample(prompt: string) {
  const model = genAI.getGenerativeModel({ model: 'gemini-1.5-flash' });
  const result = await model.generateContent(prompt);
  return result.response.text();
}

// import.meta.main is supported in Bun and recent Node releases; on
// older Node runtimes, call runSample(...) directly instead.
if (import.meta.main) {
  runSample('Say hello in one sentence.').then(console.log);
}
Run it locally:
npx tsx src/index.ts
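Model calls can fail transiently (rate limits, network blips). A generic retry helper you could wrap around runSample is sketched below; the helper name, attempt count, and delays are arbitrary choices of mine, not part of the AI Studio output or the Gemini SDK:

```typescript
// Retry an async operation with exponential backoff.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Wait 500 ms, 1000 ms, 2000 ms, ... before the next attempt.
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}

// Usage: const text = await withRetry(() => runSample('Say hello.'));
```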
🌐 Step 6: Create a Simple API Server (Optional)
If you want to access your Gemini model from a frontend app, create an API layer with Express.
Install dependencies:
npm i express cors
npm i -D @types/express
Then create src/server.ts:
import 'dotenv/config';
import express from 'express';
import cors from 'cors';
import { GoogleGenerativeAI } from '@google/generative-ai';

const app = express();
app.use(cors());
app.use(express.json());

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);

app.post('/api/generate', async (req, res) => {
  try {
    const { prompt } = req.body as { prompt: string };
    const model = genAI.getGenerativeModel({ model: 'gemini-1.5-flash' });
    const result = await model.generateContent(prompt);
    res.json({ text: result.response.text() });
  } catch (err: any) {
    res.status(500).json({ error: 'Generation failed', details: err.message });
  }
});

const port = process.env.PORT || 3000;
app.listen(port, () => console.log(`API running on http://localhost:${port}`));
Run your server:
npx tsx src/server.ts
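The handler above trusts req.body blindly. Before invoking the model, you could run a small validation check like the sketch below; the function name and the 8000-character cap are arbitrary choices of mine, not an API rule:

```typescript
// Validate an incoming request body before it reaches the model.
type PromptCheck =
  | { ok: true; prompt: string }
  | { ok: false; error: string };

function validatePrompt(body: unknown): PromptCheck {
  if (typeof body !== 'object' || body === null) {
    return { ok: false, error: 'Body must be a JSON object' };
  }
  const prompt = (body as Record<string, unknown>).prompt;
  if (typeof prompt !== 'string' || prompt.trim().length === 0) {
    return { ok: false, error: 'prompt must be a non-empty string' };
  }
  if (prompt.length > 8000) {
    return { ok: false, error: 'prompt exceeds the 8000-character limit' };
  }
  return { ok: true, prompt };
}
```

In the Express handler you would answer res.status(400).json({ error: check.error }) when the check fails.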
⚙️ Step 7: Build and Deploy
Add scripts to package.json:
{
  "type": "module",
  "scripts": {
    "dev": "tsx src/server.ts",
    "build": "tsc",
    "start": "node dist/server.js"
  }
}
The "type": "module" entry lets Node run the compiled ES module output in dist/.
Then build:
npm run build
✅ Deploy Options
Vercel
{
  "version": 2,
  "builds": [{ "src": "src/server.ts", "use": "@vercel/node" }],
  "routes": [{ "src": "/(.*)", "dest": "src/server.ts" }]
}
Note: Vercel runs this as a serverless function, so exporting your Express app (export default app) is more reliable than depending on app.listen.
Set your environment variable GEMINI_API_KEY in the Vercel dashboard and deploy with:
npx vercel
Render
- Push your project to GitHub.
- Create a new Web Service.
- Build Command: npm install && npm run build
- Start Command: npm start
Google Cloud Run
Use this Dockerfile:
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
EXPOSE 8080
CMD ["node", "dist/server.js"]
Deploy with:
gcloud run deploy ai-studio-ts \
  --source . \
  --region us-central1 \
  --allow-unauthenticated \
  --set-env-vars GEMINI_API_KEY=YOUR_KEY
For production, prefer pulling the key from Secret Manager (via the --set-secrets flag) rather than passing it on the command line.
💻 Step 8: Calling the API from a Frontend App
Once deployed, you can send a request from your frontend like this:
const res = await fetch('/api/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ prompt: 'Write a haiku about TypeScript' })
});
const data = await res.json();
console.log(data.text);
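A slightly hardened variant is sketched below: it surfaces HTTP errors instead of silently parsing them and aborts slow requests. The function name and the 15-second timeout are assumptions of mine, not part of the API:

```typescript
// Call the /api/generate endpoint with a timeout and basic error handling.
async function generate(prompt: string, timeoutMs = 15_000): Promise<string> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch('/api/generate', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt }),
      signal: controller.signal,
    });
    if (!res.ok) {
      throw new Error(`Request failed with status ${res.status}`);
    }
    const data = (await res.json()) as { text: string };
    return data.text;
  } finally {
    clearTimeout(timer);
  }
}
```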
🧠 Step 9: Troubleshooting Tips
| Issue | Solution |
|---|---|
| fetch failed or 401 | Check your API key and environment setup |
| SyntaxError: Cannot use import statement outside a module | Add "type": "module" to package.json |
| CORS errors | Use app.use(cors()) in your Express server |
| Empty responses | Ensure you’re awaiting the model’s .generateContent() call |
🧭 Step 10: Going Further
- Add rate limiting with express-rate-limit
- Log and monitor requests with pino or winston
- Validate user input to prevent abuse
- Experiment with different models (e.g., gemini-1.5-pro)
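For rate limiting, express-rate-limit is the easy path. To make the idea concrete, here is a dependency-free sliding-window sketch; the class name and limits are mine, and a single-process Map like this will not coordinate across multiple server instances (use a shared store such as Redis for that):

```typescript
// Minimal in-memory sliding-window rate limiter.
class SlidingWindowLimiter {
  private hits = new Map<string, number[]>();

  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the caller identified by `key` is under the limit.
  allow(key: string, now: number = Date.now()): boolean {
    // Keep only timestamps still inside the window.
    const recent = (this.hits.get(key) ?? []).filter(
      (t) => now - t < this.windowMs,
    );
    if (recent.length >= this.limit) {
      this.hits.set(key, recent);
      return false;
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}
```

Wired into Express, you would check limiter.allow(req.ip ?? 'unknown') at the top of the handler and answer 429 when it returns false.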
🎉 Conclusion
That’s it! You’ve learned how to:
✅ Set up and run TypeScript files generated by AI Studio
✅ Connect securely to the Gemini API
✅ Wrap your AI logic in an Express API
✅ Deploy it anywhere (Vercel, Render, Cloud Run)
With this setup, you can extend your AI Studio prototypes into full-fledged production apps — with your own control, hosting, and integrations.