5 Frontend Tools I Actually Have Open
On React, 3D, design agents, and the one site that still teaches me things
April 2026 · 7 min read
Last week I was building a client landing page at 10pm. Deadline the next morning. I needed a scroll-triggered 3D element, a component that animated on mount, and the whole thing had to match a Figma file I hadn't designed.
Three years ago that would have taken three days and two specialists.
I shipped it before midnight.
The tools changed. The 10pm didn't.
Here are the five things that made that night possible, and that I keep opening every week, not just when it's urgent.
1. React Bits: when component libraries stop being boring
🤓 Dev Mode
React Bits is a collection of animated, interactive UI components built for React. Unlike shadcn or Radix, which are excellent but deliberately unopinionated about visuals, React Bits ships with motion built in. Components use CSS animations and lightweight JS to handle entrance states, hover interactions, and scroll behavior out of the box. You copy the source, drop it in your project, and own it completely. No package to maintain, no version conflicts, no upstream breaking changes to worry about.
The components are built on standard patterns, nothing exotic, which means they integrate cleanly with whatever stack you're already running.
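The copy-paste pattern usually reduces to a tiny pure helper plus CSS. A hypothetical sketch in that spirit (mine, not actual React Bits source): computing staggered entrance delays so a list of cards cascades in on mount.

```typescript
// Illustrative helper, not React Bits code: each item starts its
// entrance animation `baseMs` later than the previous one.
function staggerDelays(count: number, baseMs: number = 80): string[] {
  return Array.from({ length: count }, (_, i) => `${i * baseMs}ms`);
}

// In a component you would map these onto `animation-delay` styles:
//   items.map((item, i) => <Card style={{ animationDelay: delays[i] }} />)
console.log(staggerDelays(3)); // ["0ms", "80ms", "160ms"]
```

The point is that the motion logic stays in your codebase: if the stagger feels wrong, you change one number instead of fighting a library's API.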
👶 Just Follow Along
Most component libraries give you a button that looks like every other button on the internet.
React Bits gives you buttons that actually do something when you hover them. Text that moves. Cards that respond. Backgrounds that feel alive.
You copy the code, paste it in, tweak the colors. It takes five minutes and makes the prototype look like you spent a week on it. For a freelancer on a deadline, that's not a small thing.
2. Figma MCP: the design system that finally talks to the AI
🤓 Dev Mode
Figma's MCP server exposes your file's design tokens, component variants, and layout data as callable functions that AI agents can query at runtime. When you give Claude or Cursor access to it, the model doesn't just see your prompt; it reads the actual design system. Typography scales, spacing tokens, component props, color variables. All available as structured context before a single line of code is generated.
The integration lives in your mcp_config.json like any other MCP server. One block, one API key, one refresh.
```json
{
  "mcpServers": {
    "figma": {
      "serverUrl": "https://mcp.figma.com/mcp",
      "headers": {
        "X-Figma-Token": "YOUR-KEY"
      }
    }
  }
}
```

After that, "build this component to match the design system" actually means something to the agent.
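Once the agent can read the tokens, the payoff is mechanical. A hypothetical sketch of what that structured context enables (the token names and payload shape here are invented for illustration, not Figma's actual API response): mapping design variables straight onto CSS custom properties.

```typescript
// Invented token payload; Figma's real MCP response shape differs.
const tokens: Record<string, string> = {
  "color/primary": "#1A1A2E",
  "font/heading-size": "32px",
};

// Turn "color/primary" → "--color-primary: #1A1A2E;"
function toCssVariables(vars: Record<string, string>): string {
  return Object.entries(vars)
    .map(([name, value]) => `--${name.replace(/\//g, "-")}: ${value};`)
    .join("\n");
}

console.log(toCssVariables(tokens));
// --color-primary: #1A1A2E;
// --font-heading-size: 32px;
```

The transformation is trivial; the win is that the values come from the file instead of from your memory of the file.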
👶 Just Follow Along
Before this existed, the workflow was: open Figma, read the values, write them down, tell the AI "the primary color is #1A1A2E and the heading font is 32px" and pray it remembered.
Now the AI just knows. It reads the Figma file itself.
It's the difference between describing a room to someone over the phone and walking them into it. Same information, completely different result.
If you work with clients who have an established brand, this one is a genuine time-saver.
3. Omma: 3D on the web without the seven-tool pipeline
🤓 Dev Mode
Omma is an agentic creative platform built by the team behind Spline, launched March 24, 2026. It orchestrates multiple AI agents in parallel: one handles code generation (Three.js, Tailwind, GLSL shaders), one generates 3D geometry in GLTF/GLB/OBJ, one manages image and media assets. The parallel execution matters: traditional sequential generation creates integration debt at every handoff. Omma resolves the dependencies in real time, so the code and the assets are built to fit each other.
Output is production-ready and deployable. The data pipeline accepts CSV, JSON, DOC, GLTF, PNG, SVG, and MP4, which means you can feed real data into a 3D data visualization, not just hardcode placeholder values.
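To make "feed real data into a 3D visualization" concrete, here is a minimal hand-written sketch (mine, not Omma's pipeline) of the first step: parsing CSV rows into xyz positions that a Three.js scene could consume as point coordinates.

```typescript
// Illustrative CSV-to-points parser, not Omma's actual pipeline.
// Expects a header row containing "x,y,z" followed by numeric rows.
type Point3 = { x: number; y: number; z: number };

function csvToPoints(csv: string): Point3[] {
  const [header, ...rows] = csv.trim().split("\n");
  const cols = header.split(",").map((c) => c.trim());
  return rows.map((row) => {
    const values = row.split(",").map(Number);
    return {
      x: values[cols.indexOf("x")],
      y: values[cols.indexOf("y")],
      z: values[cols.indexOf("z")],
    };
  });
}

console.log(csvToPoints("x,y,z\n1,2,3\n4,5,6"));
// [ { x: 1, y: 2, z: 3 }, { x: 4, y: 5, z: 6 } ]
```

Everything downstream (geometry, materials, camera) is what Omma generates for you; the value of the pipeline is that this boring data-plumbing step is handled too.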
The free plan is limited (50 credits, 5 chats/month), and heavy 3D generation exhausts credits faster than you'd expect. Budget accordingly.
👶 Just Follow Along
Before Omma, getting a 3D element on a web page meant: Blender for the model, a Three.js tutorial you didn't fully understand, an asset pipeline you had to configure yourself, and two days you didn't have.
Now you describe what you want ("a dark landing page with a rotating 3D product sphere, scroll-triggered reveal, particle background") and Omma builds the code, the 3D model, and the animations simultaneously.
It's not magic. Complex scenes still need iteration. But for splash pages and prototypes that need to look expensive fast, nothing else comes close right now.
It launched two weeks ago and almost nobody in Italy is writing about it yet. Consider this your early warning.
4. Lovable: where the code actually comes out working
🤓 Dev Mode
Lovable is a prompt-to-code platform that generates full React applications with routing, state management, and component architecture. Unlike pure code generators, it maintains project context across iterations: you can refine, extend, and debug within the same session without losing the structure it built.
The stack it outputs is clean enough to take outside: standard React, deployable anywhere. No vendor lock-in past the generation step. My workflow is Lovable for scaffolding, Claude Code for complex logic, Git throughout, Netlify for deployment. The seams are invisible to the client.
It's not autonomous. You still need to know what you're looking at. But the boilerplate problem, the thing that burns the first two days of any project, is effectively solved.
👶 Just Follow Along
I've used Lovable to build sites that are live right now, with real clients, getting real traffic.
That's the only thing that matters when someone asks if a tool is worth it.
The honest version: it gets you 80% of the way there faster than anything else. The last 20% is still on you. But if you know what you're doing, 80% in two hours instead of two days changes the economics of freelance work completely.
5. Codrops: the site that still teaches me how things actually work
🤓 Dev Mode
Codrops publishes deep-dive frontend tutorials focused on advanced visual techniques: WebGL, GLSL shaders, GSAP ScrollTrigger implementations, CSS scroll-driven animations, Three.js scene construction. Each article ships with a live demo and commented source code. The technical depth is consistent and the examples are original, not rehashes of Stack Overflow answers.
It's not a tool. It's a knowledge resource. But in a world where AI generates the code, understanding what the code is doing becomes the actual differentiator.
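The kind of understanding a Codrops tutorial builds is often just math: why a parallax feels off usually comes down to how scroll progress maps to offset. A hand-written sketch of that mapping (mine, not from any specific Codrops article), kept DOM-free so the logic is easy to reason about:

```typescript
// Map an element's scroll progress to [0, 1]: 0 when it enters the
// bottom of the viewport, 1 when it fully leaves the top.
function scrollProgress(elementTop: number, elementHeight: number,
                        scrollY: number, viewportHeight: number): number {
  const start = elementTop - viewportHeight; // element enters viewport
  const end = elementTop + elementHeight;    // element fully leaves
  const raw = (scrollY - start) / (end - start);
  return Math.min(1, Math.max(0, raw));      // clamp to [0, 1]
}

// Depth < 1 makes the layer move slower than the scroll: the classic effect.
function parallaxOffset(progress: number, depth: number = 0.3): number {
  return (progress - 0.5) * depth * 100; // px offset, 0 at the midpoint
}

const p = scrollProgress(1000, 400, 800, 800); // element halfway through
console.log(parallaxOffset(p)); // 0
```

When a scroll animation "feels off," it is usually the clamp, the easing, or the depth factor, and knowing which is exactly the differentiator the section above describes.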
πΆ Just Follow Along
Codrops is not an AI tool. It doesn't generate anything.
It's the website I open when I want to understand how a parallax effect actually works, not just have the AI produce one. When I want to know why a scroll animation feels off, not just ask someone to fix it.
The difference between a developer who uses AI tools and one who just depends on them is whether they understand what's happening underneath.
Codrops is where I go to stay on the right side of that line.
The Stack in One Table
| Tool | Category | What it solves |
|---|---|---|
| React Bits | Component library | Visual quality without starting from zero |
| Figma MCP | AI-design bridge | Design system → code, no manual handoff |
| Omma | 3D/web generation | Interactive 3D without a specialist stack |
| Lovable | Code generation | Prototypes and real sites, fast |
| Codrops | Technical reference | Understanding what you're shipping |
These aren't tools I tested for an article.
They're tools I had open last Tuesday.
Filed under: things that made a 10pm deadline feel manageable.