How I Boosted My Nuxt 3 Portfolio Speed by 3x: From 4.5s to 1.4s
Complete guide to optimizing Nuxt 3 performance with SSG, lazy loading, tree-shaking, and image optimization. Learn how I reduced load time from 4.5s to 1.4s using route prerendering, Vite's Rollup build options, and WebP images.

When My Beautiful Portfolio Felt Sluggish
I spent weeks building my personal portfolio with Nuxt 3, Vue 3, TypeScript, and Vite. The design looked gorgeous, the animations were smooth, and I was proud of every component I crafted. But then I opened Chrome DevTools and ran a Lighthouse audit.
4.5 seconds. That's how long visitors had to wait before my site became interactive.
For a portfolio that's supposed to showcase my skills as a web developer, that number felt like a confession of failure. I knew I had to fix this, not just for the metrics, but because every second of delay could mean losing a potential client or employer.
So I dedicated a weekend to performance optimization. The result? 1.4 seconds—a 3x improvement that transformed my portfolio from sluggish to snappy. In this guide, I'll walk you through exactly how I did it, sharing the techniques that made the biggest impact.
The Performance Problem
Before diving into solutions, I needed to understand what was actually slow. I ran comprehensive audits using Lighthouse, WebPageTest, and Chrome DevTools Performance tab. The results painted a clear picture:
Initial Metrics (Before Optimization)
| Metric | Score | Issue |
|---|---|---|
| First Contentful Paint (FCP) | 3.9s | Content appeared too late |
| Time to Interactive (TTI) | 4.5s | Users couldn't interact |
| Largest Contentful Paint (LCP) | 4.2s | Main content took forever |
| Total Blocking Time (TBT) | 820ms | JavaScript blocked interaction |
| Bundle Size | 830KB | Way too heavy |
| Cumulative Layout Shift (CLS) | 0.15 | Content jumped around |
Root Causes Identified
After analyzing the waterfall charts and performance traces, I identified four major bottlenecks:
- SSR Configuration Overhead - Every page request triggered server-side rendering, even for completely static content like my About page
- Heavy Component Imports - I was importing entire libraries when I only needed specific functions (looking at you, Lodash)
- Unoptimized Images - High-resolution JPGs and PNGs without compression, no lazy loading, no modern formats
- Everything Loading Upfront - All components and data loaded immediately, regardless of viewport visibility
The scariest part? My portfolio only had about 5 pages. If it was this slow with minimal content, imagine a larger application!
Understanding SSR vs SSG vs ISR
Before fixing anything, I needed to understand the different rendering strategies Nuxt 3 offers. This was the biggest conceptual shift that unlocked all other optimizations.
Server-Side Rendering (SSR)
How it works: For every request, the server generates HTML dynamically by running your Vue components on the server. The fully rendered HTML is sent to the browser.
When to use:
- E-commerce product pages that need real-time inventory
- News sites with constantly updating content
- User dashboards with personalized data
- Search results that change frequently
My mistake: I was using SSR for my portfolio, which has completely static content. Every visitor triggered unnecessary server rendering.
Static Site Generation (SSG)
How it works: During the build process, Nuxt generates HTML files for all your routes. These pre-rendered HTML files are served instantly without any server processing.
When to use:
- Personal portfolios (like mine!)
- Marketing landing pages
- Documentation sites
- Blogs with infrequent updates
The breakthrough: My portfolio was the perfect SSG candidate. Content never changed between deployments, so why render on every request?
Incremental Static Regeneration (ISR)
How it works: Combines SSG with periodic regeneration. Pages are statically generated, but can be refreshed in the background after a certain time period.
When to use:
- Blogs that update daily/weekly
- Product catalogs that change occasionally
- Content that's mostly static but needs freshness
Switching from SSR to SSG
The moment I switched to SSG, my Time to First Byte (TTFB) dropped from 280ms to 12ms. Here's how I configured it:
// nuxt.config.ts
export default defineNuxtConfig({
  // Keep SSR enabled so pages are rendered to HTML at build time
  ssr: true,
  // Configure Nitro to prerender routes into static files
  nitro: {
    prerender: {
      // Explicitly define routes to prerender
      routes: [
        '/',
        '/about',
        '/projects',
        '/blog',
        '/contact'
      ],
      // Automatically discover routes from links
      crawlLinks: true,
      // Skip routes with these prefixes (handled separately)
      ignore: ['/api', '/admin']
    }
  }
})
// Note: the Nuxt 2 option target: 'static' no longer exists in Nuxt 3.
// Build the static site with: npx nuxi generate
Key insight: For static sites, the server does work once during build, not on every request. This is the foundation of performance.
Optimization Technique 1: Route Prerendering Strategy
Even with SSG enabled, I needed to configure which routes got prerendered and which remained dynamic. Not everything should be static.
Smart Route Configuration
// nuxt.config.ts
export default defineNuxtConfig({
  nitro: {
    prerender: {
      routes: [
        '/',
        '/about',
        '/projects',
        '/contact',
        '/blog', // Blog index
        // Don't prerender individual blog posts with [slug]
      ],
      crawlLinks: true,
      // Fail build if prerender fails
      failOnError: true,
    }
  },
  // Configure route rules for hybrid rendering
  routeRules: {
    // Static pages (prerendered at build time)
    '/': { prerender: true },
    '/about': { prerender: true },
    '/projects': { prerender: true },
    // ISR for blog (regenerate every 3600 seconds)
    '/blog/**': { isr: 3600 },
    // SWR for API routes (serve stale while revalidating)
    '/api/**': { swr: true },
    // Client-side only rendering for admin
    '/admin/**': { ssr: false }
  }
})
Route Rules Explained
- prerender: true - Generate HTML at build time, serve instantly
- isr: 3600 - Generate on first request (or at build), cache, and regenerate in the background every hour
- swr: true - Cache responses, serve stale content while fetching fresh data
- ssr: false - Client-side rendering only (useful for authenticated routes)
Impact: Build time increased by 12 seconds, but every page load became instant for static content.
Optimization Technique 2: Lazy Loading Components
I was importing every component at the top of my pages, even components that appeared far below the fold. This bloated my initial JavaScript bundle unnecessarily.
Before: Eager Loading Everything
<!-- pages/index.vue - BAD -->
<script setup lang="ts">
import HeroSection from '@/components/HeroSection.vue'
import AboutPreview from '@/components/AboutPreview.vue'
import ProjectGrid from '@/components/ProjectGrid.vue'
import BlogPreview from '@/components/BlogPreview.vue'
import ContactForm from '@/components/ContactForm.vue'
import Footer from '@/components/Footer.vue'
</script>
<template>
  <div>
    <HeroSection />
    <AboutPreview />
    <ProjectGrid />
    <BlogPreview />
    <ContactForm />
    <Footer />
  </div>
</template>
Problem: All 6 components and their dependencies loaded immediately, even though users might never scroll to the bottom.
After: Strategic Lazy Loading
<!-- pages/index.vue - GOOD -->
<script setup lang="ts">
import { defineAsyncComponent } from 'vue'
// Critical above-the-fold content: load immediately
import HeroSection from '@/components/HeroSection.vue'
import AboutPreview from '@/components/AboutPreview.vue'
// Below-the-fold content: lazy load
const ProjectGrid = defineAsyncComponent(() =>
  import('@/components/ProjectGrid.vue')
)
const BlogPreview = defineAsyncComponent(() =>
  import('@/components/BlogPreview.vue')
)
const ContactForm = defineAsyncComponent(() =>
  import('@/components/ContactForm.vue')
)
const Footer = defineAsyncComponent(() =>
  import('@/components/Footer.vue')
)
</script>

<template>
  <div>
    <!-- Load immediately -->
    <HeroSection />
    <AboutPreview />

    <!-- Load on demand with fallback -->
    <Suspense>
      <template #default>
        <ProjectGrid />
        <BlogPreview />
        <ContactForm />
        <Footer />
      </template>
      <template #fallback>
        <div class="loading-skeleton">Loading...</div>
      </template>
    </Suspense>
  </div>
</template>
Nuxt 3's Built-in Lazy Components
Nuxt 3 provides an even simpler syntax using the Lazy prefix:
<template>
  <div>
    <HeroSection />
    <AboutPreview />
    <!-- Automatically lazy loaded -->
    <LazyProjectGrid />
    <LazyBlogPreview />
    <LazyContactForm />
    <LazyFooter />
  </div>
</template>
How it works: Nuxt automatically detects the Lazy prefix and code-splits these components into separate chunks that load on demand.
Impact: Initial JavaScript bundle reduced from 830KB to 310KB (62% reduction). Users only download what they see.
Optimization Technique 3: Tree-Shaking Dependencies
I was importing libraries inefficiently, pulling in massive dependencies when I only needed tiny utilities.
The Lodash Problem
Before:
// utils/helpers.ts - BAD
import _ from 'lodash'

export function deepClone(obj: any) {
  return _.cloneDeep(obj)
}

export function debounceSearch(fn: Function, delay: number) {
  return _.debounce(fn, delay)
}

export function chunk(arr: any[], size: number) {
  return _.chunk(arr, size)
}
Bundle size impact: 72KB just for Lodash (entire library imported)
After:
// utils/helpers.ts - GOOD
// lodash-es ships ES modules, so Vite can tree-shake named imports
import { cloneDeep, debounce, chunk } from 'lodash-es'

export function deepClone(obj: any) {
  return cloneDeep(obj)
}

export function debounceSearch(fn: Function, delay: number) {
  return debounce(fn, delay)
}

export function chunkArray(arr: any[], size: number) {
  return chunk(arr, size)
}
Bundle size impact: 8KB (only the functions I use)
Alternative: Use Native JavaScript
For many utilities, you don't even need Lodash:
// utils/helpers.ts - BEST
export function deepClone(obj: any) {
  return structuredClone(obj) // Native API (Node 17+, all modern browsers)
}

export function debounceSearch(fn: Function, delay: number) {
  // ReturnType<typeof setTimeout> works with both browser and Node typings
  let timeoutId: ReturnType<typeof setTimeout>
  return (...args: any[]) => {
    clearTimeout(timeoutId)
    timeoutId = setTimeout(() => fn(...args), delay)
  }
}

export function chunkArray<T>(arr: T[], size: number): T[][] {
  return Array.from(
    { length: Math.ceil(arr.length / size) },
    (_, i) => arr.slice(i * size, (i + 1) * size)
  )
}
Bundle size impact: ~1KB (no external dependencies)
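A quick sanity check of the native helpers, re-declared here so the snippet stands alone and can be run directly with tsx or ts-node:

```typescript
// Self-contained check of the native replacements above
export function chunkArray<T>(arr: T[], size: number): T[][] {
  return Array.from(
    { length: Math.ceil(arr.length / size) },
    (_, i) => arr.slice(i * size, (i + 1) * size)
  )
}

export function deepClone<T>(obj: T): T {
  // structuredClone handles nested objects, arrays, Dates, Maps, etc.
  return structuredClone(obj)
}

const original = { a: 1, nested: { b: [1, 2, 3] } }
const copy = deepClone(original)
copy.nested.b.push(4) // mutating the copy must not touch the original

console.log(chunkArray([1, 2, 3, 4, 5], 2)) // → [[1, 2], [3, 4], [5]]
console.log(original.nested.b.length)       // → 3 (the clone is independent)
```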
Analyzing Bundle Composition
To find other optimization opportunities, I used Vite's bundle analyzer:
npm install -D rollup-plugin-visualizer
// nuxt.config.ts
import { defineNuxtConfig } from 'nuxt/config'
import { visualizer } from 'rollup-plugin-visualizer'

export default defineNuxtConfig({
  vite: {
    plugins: [
      visualizer({
        open: true,
        gzipSize: true,
        brotliSize: true,
      })
    ]
  }
})
This generates a visual treemap showing which packages consume the most space. I discovered:
- @vueuse/core was importing unused composables
- moment.js was still there from old code (replaced with native Date APIs)
- Multiple icon libraries instead of one unified solution
Total savings: 215KB of unnecessary dependencies removed.
Optimization Technique 4: Image Optimization
Images were my biggest payload bottleneck. High-resolution screenshots and photos loaded without any optimization.
Installing Nuxt Image Module
npm install -D @nuxt/image
// nuxt.config.ts
export default defineNuxtConfig({
  modules: ['@nuxt/image'],
  image: {
    // Use built-in optimization
    provider: 'ipx',
    // Define image sizes for responsive images
    screens: {
      xs: 320,
      sm: 640,
      md: 768,
      lg: 1024,
      xl: 1280,
      xxl: 1536,
    },
    // Image quality (default: 80)
    quality: 80,
    // Image formats to generate
    formats: ['webp', 'avif'],
  }
})
Before: Unoptimized Images
<!-- BAD: Large PNG loaded for everyone -->
<template>
  <img
    src="/images/project-screenshot.png"
    alt="Project screenshot"
    width="800"
    height="600"
  />
</template>
Issues:
- 2.4MB PNG file
- No lazy loading
- Same size for all devices
- Legacy format (PNG)
After: Optimized with NuxtImg
<!-- GOOD: Optimized, responsive, modern formats -->
<template>
  <NuxtImg
    src="/images/project-screenshot.png"
    alt="Project screenshot"
    width="800"
    height="600"
    format="webp"
    quality="80"
    loading="lazy"
    sizes="sm:100vw md:50vw lg:800px"
    placeholder
  />
</template>
Improvements:
- Automatic WebP/AVIF conversion
- Responsive sizes for different viewports
- Lazy loading below the fold
- Low-quality placeholder during load
- Result: 2.4MB PNG → 180KB WebP (92% reduction)
Critical Images: Priority Loading
For above-the-fold images like hero backgrounds, disable lazy loading:
<template>
  <NuxtImg
    src="/images/hero-background.jpg"
    alt="Hero background"
    format="webp"
    loading="eager"
    fetchpriority="high"
    preload
  />
</template>
Key attributes:
- loading="eager" - Load immediately
- fetchpriority="high" - Prioritize in the browser's fetch queue
- preload - Add a <link rel="preload"> tag to the HTML head
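For critical images rendered outside NuxtImg (a CSS background, for example), the same preload hint can be added manually with useHead. A sketch, assuming the same hero image path; useHead is auto-imported in Nuxt 3:

```typescript
// In the page's script setup: manually preload a critical image
useHead({
  link: [
    {
      rel: 'preload',
      as: 'image',
      href: '/images/hero-background.jpg',
      fetchpriority: 'high',
    },
  ],
})
```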
Impact: Largest Contentful Paint (LCP) improved by 0.9 seconds (4.2s → 3.3s just from images).
Optimization Technique 5: Client-Side Data Fetching
My project data came from a JSON API. Initially, I was fetching it during SSR, which blocked page rendering.
Before: Blocking SSR Fetch
<script setup lang="ts">
// This blocks server rendering
const { data: projects } = await useFetch('/api/projects')
</script>
Problem: Server waits for API response before sending HTML, increasing TTFB.
After: Lazy Client-Side Fetch
<script setup lang="ts">
// Fetch only on the client; useLazyFetch is non-blocking by default,
// so an explicit lazy: true option is redundant here
const { data: projects, pending, error } = useLazyFetch('/api/projects', {
  server: false, // Don't fetch during SSR
  default: () => [] // Default value while loading
})
</script>
<template>
  <div>
    <!-- Show skeleton while loading -->
    <div v-if="pending" class="skeleton-grid">
      <ProjectSkeleton v-for="i in 6" :key="i" />
    </div>
    <!-- Show error state -->
    <div v-else-if="error" class="error-message">
      Failed to load projects. Please refresh.
    </div>
    <!-- Show actual content -->
    <div v-else class="project-grid">
      <ProjectCard
        v-for="project in projects"
        :key="project.id"
        :project="project"
      />
    </div>
  </div>
</template>
When to Use Each Fetch Composable
useFetch() - Fetches during SSR and hydrates on the client; blocks navigation until the data resolves
- Use for: Critical data needed for SEO
- Example: Blog post content
useLazyFetch() - Non-blocking fetch, returns immediately
- Use for: Non-critical data, faster initial render
- Example: Comments, related posts
useFetch(..., { server: false }) - Client-only fetch
- Use for: Personalized data, authenticated content
- Example: User dashboards, shopping carts
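One more lever when you do keep a fetch on the server: shrink what gets serialized into the SSR payload with the transform option. A sketch, where the /api/projects endpoint and its field names are assumptions:

```typescript
// Strip heavy fields before the result is serialized into the page payload
const { data: projects } = await useFetch('/api/projects', {
  transform: (items: any[]) =>
    items.map(({ id, title, thumbnail }) => ({ id, title, thumbnail })),
})
```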
Impact: the server no longer waits on the API before responding; response time for server-rendered pages dropped from 280ms to 45ms.
Optimization Technique 6: Manual Code Splitting
Even with automatic code splitting, some vendor libraries remained in the main bundle. Vite's rollup configuration let me split them manually.
Configuring Manual Chunks
// nuxt.config.ts
export default defineNuxtConfig({
  vite: {
    build: {
      rollupOptions: {
        output: {
          manualChunks: {
            // Vue ecosystem
            'vue-vendor': ['vue', 'vue-router'],
            // Utility libraries
            'utils': ['lodash-es', 'date-fns'],
            // UI components
            'ui-components': [
              '@/components/ui/Button.vue',
              '@/components/ui/Card.vue',
              '@/components/ui/Modal.vue',
            ],
            // Heavy markdown processing
            'markdown': ['marked', 'highlight.js'],
          }
        }
      },
      // Chunk size warnings
      chunkSizeWarningLimit: 1000,
      // Minification (terser must be installed: npm install -D terser)
      minify: 'terser',
      terserOptions: {
        compress: {
          drop_console: true, // Remove console.logs in production
          drop_debugger: true,
        }
      }
    }
  }
})
Benefits of Manual Chunking
- Better Caching - Vendor code rarely changes, so browsers cache it longer
- Parallel Downloads - Browser downloads multiple chunks simultaneously
- Code Reuse - Shared chunks prevent duplication across pages
Example: If /about and /projects both use the ui-components chunk, it's downloaded once and reused.
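Rollup also accepts manualChunks as a function, which scales better than a hand-maintained list of modules. A minimal sketch; the 'vendor' chunk name is just a convention:

```typescript
// nuxt.config.ts - function form of manualChunks
export default defineNuxtConfig({
  vite: {
    build: {
      rollupOptions: {
        output: {
          manualChunks(id) {
            // Route all third-party code into one long-cached vendor chunk
            if (id.includes('node_modules')) {
              return 'vendor'
            }
          },
        },
      },
    },
  },
})
```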
Impact: Returning visitors saw 80% faster page loads due to cached vendor chunks.
Optimization Technique 7: Custom Performance Composables
To centralize performance logic, I created reusable composables.
Debounced Window Resize
// composables/useDebouncedResize.ts
import { ref, onMounted, onUnmounted } from 'vue'

export function useDebouncedResize(delay: number = 200) {
  const width = ref(0)
  const height = ref(0)
  // ReturnType<typeof setTimeout> works with both browser and Node typings
  let timeoutId: ReturnType<typeof setTimeout>

  const updateSize = () => {
    width.value = window.innerWidth
    height.value = window.innerHeight
  }

  const debouncedUpdate = () => {
    clearTimeout(timeoutId)
    timeoutId = setTimeout(updateSize, delay)
  }

  onMounted(() => {
    updateSize() // Initial value
    window.addEventListener('resize', debouncedUpdate)
  })

  onUnmounted(() => {
    window.removeEventListener('resize', debouncedUpdate)
    clearTimeout(timeoutId)
  })

  return { width, height }
}
Usage:
<script setup lang="ts">
const { width, height } = useDebouncedResize(300)
</script>

<template>
  <div>
    <p>Window size: {{ width }}x{{ height }}</p>
    <div v-if="width < 768" class="mobile-menu">Mobile</div>
    <div v-else class="desktop-menu">Desktop</div>
  </div>
</template>
Intersection Observer for Lazy Effects
// composables/useIntersectionObserver.ts
import { ref, onMounted, onUnmounted } from 'vue'

export function useIntersectionObserver(
  options: IntersectionObserverInit = {}
) {
  const isVisible = ref(false)
  const target = ref<HTMLElement | null>(null)
  let observer: IntersectionObserver | null = null

  onMounted(() => {
    if (!target.value) return
    observer = new IntersectionObserver(([entry]) => {
      isVisible.value = entry.isIntersecting
    }, {
      threshold: 0.1,
      ...options
    })
    observer.observe(target.value)
  })

  onUnmounted(() => {
    if (observer && target.value) {
      observer.unobserve(target.value)
      observer.disconnect()
    }
  })

  return { isVisible, target }
}
Usage:
<script setup lang="ts">
const { isVisible, target } = useIntersectionObserver()

// Only fetch data when the section is visible
watch(isVisible, (visible) => {
  if (visible) {
    fetchProjects()
  }
})
</script>

<template>
  <section ref="target" class="projects-section">
    <h2>My Projects</h2>
    <div v-if="isVisible" class="fade-in">
      <!-- Content loads only when visible -->
    </div>
  </section>
</template>
These composables eliminated redundant event listeners and improved CPU efficiency across my portfolio.
The Results: Before vs After
After implementing all optimizations, I ran another comprehensive audit. The improvements were dramatic:
Performance Metrics Comparison
| Metric | Before | After | Improvement |
|---|---|---|---|
| Load Time (LCP) | 4.5s | 1.4s | 68% faster |
| First Contentful Paint | 3.9s | 1.2s | 69% faster |
| Time to Interactive | 4.5s | 1.4s | 68% faster |
| Total Blocking Time | 820ms | 95ms | 88% reduction |
| Bundle Size | 830KB | 280KB | 66% smaller |
| Lighthouse Score | 67/100 | 98/100 | +31 points |
| Cumulative Layout Shift | 0.15 | 0.02 | 86% better |
Real-World Impact
Mobile 4G Connection (Tested on iPhone 13):
- Before: 6.2 seconds to interactive
- After: 2.1 seconds to interactive
Desktop (Tested on MacBook Pro M1):
- Before: 2.8 seconds to interactive
- After: 0.8 seconds to interactive
The portfolio now feels instant, even on slower connections. Visitors can start browsing immediately instead of waiting for JavaScript to download and execute.
Key Takeaways and Best Practices
After this optimization journey, here are the lessons that made the biggest difference:
1. Choose the Right Rendering Strategy
Don't default to SSR. Analyze your content:
- Mostly static? Use SSG
- Frequently updated? Use ISR
- User-specific? Use SSR or client-side rendering
- Hybrid needs? Use route rules for per-route configuration
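That decision table maps directly onto route rules. A minimal sketch; the paths here are illustrative, not from my portfolio:

```typescript
// nuxt.config.ts - one rule per content type
export default defineNuxtConfig({
  routeRules: {
    '/docs/**': { prerender: true }, // mostly static: SSG
    '/news/**': { isr: 600 },        // frequently updated: ISR
    '/account/**': { ssr: false },   // user-specific: client-side rendering
  }
})
```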
2. Measure Before Optimizing
Run audits to identify real bottlenecks:
- Lighthouse for overall performance score
- WebPageTest for real-world device testing
- Bundle analyzer for code size issues
- Chrome DevTools Performance for runtime analysis
Don't guess—measure!
3. Lazy Load Strategically
Not everything needs immediate loading:
- Above the fold: Load immediately
- Below the fold: Lazy load
- User interactions: Load on demand (modals, dropdowns)
- Third-party scripts: Defer until after page load
4. Optimize Images Religiously
Images are usually the heaviest assets:
- Use modern formats (WebP, AVIF)
- Implement responsive sizing
- Compress aggressively (80% quality is fine)
- Lazy load below-the-fold images
- Use CDN for delivery
5. Tree-Shake Dependencies
Every kilobyte counts:
- Import only what you use
- Consider native alternatives to libraries
- Audit dependencies regularly
- Remove unused packages
6. Cache Effectively
Leverage browser caching:
- Split vendor code into separate chunks
- Use long cache headers for static assets
- Implement service workers for offline support
- Consider CDN edge caching
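Most of these caching policies can be expressed as route rules. A sketch assuming Nuxt's default /_nuxt asset path and an /images folder; many static hosts already set similar headers for hashed build assets:

```typescript
// nuxt.config.ts - long-lived cache headers via route rules
export default defineNuxtConfig({
  routeRules: {
    // Hashed build assets never change, so cache them for a year
    '/_nuxt/**': {
      headers: { 'cache-control': 'public, max-age=31536000, immutable' }
    },
    // Images can be cached for a day, then revalidated in the background
    '/images/**': {
      headers: { 'cache-control': 'public, max-age=86400, stale-while-revalidate=604800' }
    },
  }
})
```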
7. Monitor Continuously
Performance isn't a one-time fix:
- Set up performance budgets in CI/CD
- Monitor real user metrics (RUM)
- Run automated Lighthouse tests
- Track bundle size in pull requests
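For the bundle-size check, a small script like this can run in CI. The .output/public/_nuxt path and the 500 KB budget are assumptions; adjust both for your project:

```typescript
// scripts/check-bundle-size.ts - a minimal CI bundle-size budget check
import { readdirSync, statSync } from 'node:fs'
import { join } from 'node:path'

// Recursively sum the size of every file under dir
export function dirSizeBytes(dir: string): number {
  let total = 0
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    const full = join(dir, entry.name)
    total += entry.isDirectory() ? dirSizeBytes(full) : statSync(full).size
  }
  return total
}

// Usage in CI: npx tsx scripts/check-bundle-size.ts .output/public/_nuxt
const dir = process.argv[2]
if (dir) {
  const size = dirSizeBytes(dir)
  console.log(`Client bundle: ${(size / 1024).toFixed(1)} KB`)
  if (size > 500 * 1024) {
    console.error('Bundle exceeds the 500 KB budget, failing the build')
    process.exit(1)
  }
}
```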
Common Pitfalls to Avoid
Mistake 1: Over-Optimizing
I initially tried to lazy load everything, including critical components. This actually slowed down the initial render because users saw loading skeletons for above-the-fold content.
Solution: Only lazy load below-the-fold or on-interaction content.
Mistake 2: Ignoring Mobile Performance
My optimization focused on desktop. When tested on mobile 3G, performance was still poor because JavaScript execution is much slower on mobile devices.
Solution: Test on real mobile devices or use Chrome DevTools mobile throttling.
Mistake 3: Breaking SEO
By moving all data fetching to client-side, I accidentally broke SEO for my blog posts. Search engines couldn't see the content.
Solution: Keep SEO-critical data in SSR/SSG, move non-critical data to client-side.
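In practice, that meant reverting blog content to a blocking server fetch so the rendered HTML contains the text crawlers need. A sketch; the endpoint is an assumption:

```typescript
// Blog post content must be in the server-rendered HTML for search engines
const route = useRoute()
const { data: post } = await useFetch(`/api/posts/${route.params.slug}`)
```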
Mistake 4: Not Setting Performance Budgets
Without explicit limits, bundle size gradually increased as I added features.
Solution: Configure bundle size warnings in Vite:
// nuxt.config.ts
export default defineNuxtConfig({
  vite: {
    build: {
      chunkSizeWarningLimit: 500, // Warn if a chunk exceeds 500KB
    }
  }
})
Tools and Resources
Here are the tools I used throughout this optimization process:
Performance Auditing
- Lighthouse (Chrome DevTools) - Overall performance scoring
- WebPageTest - Real-world device testing
- PageSpeed Insights - Google's perspective with Core Web Vitals
Bundle Analysis
- rollup-plugin-visualizer - Visual bundle composition
- webpack-bundle-analyzer - Alternative for webpack projects
- vite-bundle-visualizer - Vite-specific analysis
Image Optimization
- @nuxt/image - Automatic image optimization
- ImageOptim - Batch compress images before upload
- Squoosh - Browser-based image compression
Monitoring
- Chrome DevTools Performance - Runtime analysis
- Vercel Analytics - Real user monitoring (if using Vercel)
- Google Analytics 4 - Core Web Vitals tracking
Conclusion: Performance is a Feature
When I started this optimization journey, I thought performance was just about making numbers look good. But after seeing the real impact—visitors spending more time on my portfolio, lower bounce rates, and better user feedback—I realized performance is a feature, not a metric.
A fast website feels professional. It shows attention to detail. For a developer's portfolio, it's a demonstration of skill.
The techniques I shared here aren't just applicable to portfolios. Whether you're building an e-commerce site, a blog, or a SaaS application, these principles remain the same:
- Choose the right rendering strategy for your content
- Load only what's needed when it's needed
- Optimize assets ruthlessly
- Measure continuously and iterate
My 4.5s → 1.4s improvement didn't happen overnight. It took a weekend of focused work, but the impact has been lasting. Every new visitor gets a blazing-fast experience, and I sleep better knowing my portfolio makes a great first impression.
If your Nuxt 3 site feels slow, start with one technique from this guide. Run a Lighthouse audit, identify your biggest bottleneck, and tackle it. Then move to the next one. Performance optimization is an iterative process, and every improvement compounds.
Remember: every millisecond counts. In the attention economy, you lose users by being slow. Make speed your competitive advantage.
Happy optimizing, and may your Lighthouse scores always be green! 🚀
If you found this guide helpful and optimized your own Nuxt 3 project, I'd love to hear about your results! Share your before/after metrics with me on Twitter or connect on LinkedIn. Let's build a faster web together!
Support My Work
If this comprehensive guide helped you optimize your Nuxt 3 application, understand SSG vs SSR tradeoffs, or dramatically improve your Lighthouse scores, I'd really appreciate your support! Creating detailed, well-researched technical content like this takes significant time and effort. Your support helps me continue sharing knowledge and creating more helpful resources for the developer community.
☕ Buy me a coffee - Every contribution, big or small, means the world to me and keeps me motivated to create more content!