How I Set Up Figma-to-Code Using the Figma MCP Server
A practical guide to integrating Figma with GitHub Copilot for automated, standards-compliant UI code generation. Includes lessons learned, model comparisons, and workflow tips.
Design-to-code automation is evolving rapidly, and my recent work integrating Figma with GitHub Copilot has shown just how powerful these new workflows can be. Here’s a step-by-step overview of my setup, key findings, and practical results.
1. Setting Up the Figma MCP Server
Following the official Figma MCP server guide, I:
- Enabled the MCP (Model Context Protocol) server in Figma.
- Connected my Figma workspace to VS Code using the Copilot Chat extension.
- Ensured the Figma plugin and VS Code extension were both authenticated and linked to the same project.
- Used the #get_code command to request code for selected Figma components.
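For reference, the VS Code side of the setup boils down to registering Figma's local MCP server in an `.vscode/mcp.json` file. This is a minimal sketch based on my setup — the server name, endpoint, and port are assumptions that may differ depending on your Figma and extension versions, so verify them against the official guide:

```json
{
  "servers": {
    "figma": {
      "type": "sse",
      "url": "http://127.0.0.1:3845/sse"
    }
  }
}
```

Once this is in place and the MCP server is enabled in the Figma desktop app, Copilot Chat picks up the Figma tools (including #get_code) automatically.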
2. Key Integration Improvements
A major breakthrough was Figma's new ability to attach a screenshot of the selected UI component when the #get_code command is sent. This visual context lets Copilot generate code that is much closer to the intended design, rather than relying solely on layer metadata (names, frames, and auto-layout properties), which often fails to capture visual nuances like spacing, alignment, and state styling.
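Concretely, a typical round trip in Copilot Chat looks something like this (the component name here is made up for illustration):

```
1. In Figma, select the target frame or component
   (e.g. a "PrimaryButton" variant).
2. In Copilot Chat, run: #get_code
3. Copilot receives the layer metadata plus a screenshot of the
   selection, and generates a component implementation you can
   review and refine in follow-up prompts.
```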
3. Model Evaluation: Claude 3.5 Sonnet vs. GPT-4.1
I compared two leading LLMs for code generation:
- Claude 3.5 Sonnet: Produced code that was more accurate, better aligned with our internal standards, and more developer-friendly.
- GPT-4.1: Generated functional code, but it was often generic and less tailored to our needs.
Based on these results, I conducted all further tests with Claude 3.5 Sonnet.
4. Results
- UI Match: The generated code now matches 50% of the Figma design UI—a significant improvement over previous attempts.
- Component Structure: Code structure aligns with our internal standards at 80% accuracy (up from 5-10%).
- Context Awareness: Because Copilot Chat had already been used to debug and refine our codebase, it had absorbed our conventions before the Figma integration. This prior context was crucial to the high-quality results.
5. Lessons Learned
- Visual context is key: Screenshots from Figma dramatically improve code generation accuracy.
- Model choice matters: Claude 3.5 Sonnet outperformed GPT-4.1 for our use case.
- Prime the agent first: Letting Copilot work in your codebase before attempting Figma-to-code integration yields better, more consistent results.
6. Next Steps
- Continue refining the workflow to increase UI match rates.
- Explore automating more complex component patterns.
- Share feedback with the Figma and Copilot teams to help improve the integration further.
This workflow has already accelerated our design-to-code process for my current organization, and I’m excited to see how these tools continue to evolve. If you’re interested in setting up a similar workflow, I recommend starting with the Figma MCP server guide and experimenting with different LLMs to find the best fit for your team. 🚀