Hi there👋

Harshal here. Welcome to my personal blog!

Building a Cloud-Ready Expense Tracker: My Spring Boot, Docker & AWS EC2 Journey

I set out to build Treso, a secure and modern expense-tracking REST API. I wanted to go beyond simple CRUD apps and dive deep into real-world backend engineering practices, including secure authentication, containerization, and deploying to the cloud. This post is a summary of my key learnings and insights, not a step-by-step tutorial. If you’re curious about the code, check out my GitHub repo here.

💡 Why This Project?

- Practice professional backend skills: not just code that “works,” but code that’s production-ready and maintainable.
- Experiment with modern Spring Boot features: security, validation, OpenAPI docs, metrics.
- Learn cloud deployment: run my app on AWS EC2 like real-world services.
- Master Docker: build once, run anywhere, test with real databases.

🌱 Learnings from Spring Boot

1. Secure Authentication with JWT

Spring Security isn’t just about “login”; it’s a whole framework for role-based access, request filtering, and best practices. Implementing JWT taught me about:

- Stateless session management (see the first sketch after this summary)
- Custom authentication filters
- Using @ControllerAdvice for clean error responses

2. Data Validation and Exception Handling

Using Bean Validation (@Valid, @NotNull, etc.) makes your API more robust and self-documenting. A single global exception handler gives clients clear, consistent error messages: no more scattered try/catch blocks. (This pattern is also sketched below.)

3. Building for Scale and Maintainability

DTOs and service layers are worth it, even in a small app. Using JPA with clear entity relationships (user ↔ expenses) made future feature expansion (analytics, sharing, etc.) much easier.

🐳 Docker: Development and Deployment

- Multi-stage Dockerfiles are a game changer: fast, clean images without leftover build tools.
- Environment variables (12-factor style) let me use the same image everywhere, from local dev to cloud.
- Running both the app and Postgres in containers (with Docker Compose) gave me confidence my code would “just work” anywhere.

Tip: Debugging Docker networking taught me a lot about how containers talk to each other; localhost inside a container is not the same as on your laptop!

...
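To make the stateless JWT idea concrete, here is a minimal sketch of what such a setup can look like with Spring Security 6. The JwtAuthFilter is a hypothetical stand-in for a real token-validating filter, and /api/auth/** is an assumed public path; treat this as an illustration of the pattern, not Treso’s actual configuration.

```java
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import java.io.IOException;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.http.SessionCreationPolicy;
import org.springframework.security.web.SecurityFilterChain;
import org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

// Hypothetical custom filter: looks for a Bearer token on every request.
@Component
class JwtAuthFilter extends OncePerRequestFilter {
    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        String header = request.getHeader("Authorization");
        if (header != null && header.startsWith("Bearer ")) {
            String token = header.substring(7);
            // ...validate the token and populate the SecurityContext here...
        }
        chain.doFilter(request, response);
    }
}

@Configuration
public class SecurityConfig {

    private final JwtAuthFilter jwtAuthFilter;

    public SecurityConfig(JwtAuthFilter jwtAuthFilter) {
        this.jwtAuthFilter = jwtAuthFilter;
    }

    @Bean
    SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
            // Stateless: no server-side session, every request must carry its JWT
            .sessionManagement(sm -> sm.sessionCreationPolicy(SessionCreationPolicy.STATELESS))
            .csrf(csrf -> csrf.disable())
            .authorizeHttpRequests(auth -> auth
                .requestMatchers("/api/auth/**").permitAll() // assumed public login/register path
                .anyRequest().authenticated())
            // Run the JWT check before the standard username/password filter
            .addFilterBefore(jwtAuthFilter, UsernamePasswordAuthenticationFilter.class);
        return http.build();
    }
}
```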
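And here is a hedged sketch of the single-global-exception-handler pattern, using @RestControllerAdvice to turn Bean Validation failures into one consistent error payload. The field-to-message map is just one reasonable response shape, not necessarily the one Treso returns.

```java
import java.util.HashMap;
import java.util.Map;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.MethodArgumentNotValidException;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;

// One place to translate validation failures into a consistent error body.
@RestControllerAdvice
public class GlobalExceptionHandler {

    @ExceptionHandler(MethodArgumentNotValidException.class)
    public ResponseEntity<Map<String, String>> handleValidation(MethodArgumentNotValidException ex) {
        Map<String, String> errors = new HashMap<>();
        // Collect each failed field and its validation message
        ex.getBindingResult().getFieldErrors()
          .forEach(err -> errors.put(err.getField(), err.getDefaultMessage()));
        return ResponseEntity.badRequest().body(errors);
    }
}
```

With this in place, a request that fails an @NotNull check comes back as a 400 with a body like {"amount": "must not be null"} instead of a stack trace.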

August 5, 2025 · 3 min

Building a Serverless Image Upload and Processing Pipeline on AWS

I recently built a serverless image upload and processing pipeline on AWS, and this post outlines the architecture, services used, key learnings, and tips that helped me along the way.

🚀 Project Overview

The goal was to build a system where users can:

- Upload an image via an API
- Automatically process the image using AWS Rekognition (sketched below)
- Store extracted metadata in DynamoDB
- Retrieve metadata through an API

Everything runs serverlessly, using AWS-managed services.

...
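To make the processing step concrete, here is a minimal sketch in Java with AWS SDK v2, assuming the Lambda fires on the S3 upload event behind the API. The ImageProcessorHandler class, the ImageMetadata table name, and the comma-joined label format are my own illustrative choices; the post’s actual pipeline may wire these differently.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Collectors;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;

import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.DetectLabelsRequest;
import software.amazon.awssdk.services.rekognition.model.Image;
import software.amazon.awssdk.services.rekognition.model.Label;
import software.amazon.awssdk.services.rekognition.model.S3Object;

// Hypothetical Lambda handler: fires on an S3 upload, labels the image with
// Rekognition, and stores the extracted metadata in DynamoDB.
public class ImageProcessorHandler implements RequestHandler<S3Event, String> {

    private final RekognitionClient rekognition = RekognitionClient.create();
    private final DynamoDbClient dynamo = DynamoDbClient.create();

    @Override
    public String handleRequest(S3Event event, Context context) {
        var s3Record = event.getRecords().get(0);
        String bucket = s3Record.getS3().getBucket().getName();
        String key = s3Record.getS3().getObject().getKey();

        // Ask Rekognition for up to 10 labels, reading the image straight from S3
        var labels = rekognition.detectLabels(DetectLabelsRequest.builder()
                .image(Image.builder()
                        .s3Object(S3Object.builder().bucket(bucket).name(key).build())
                        .build())
                .maxLabels(10)
                .build())
                .labels();

        // Store a flat, comma-separated label list keyed by the object key
        Map<String, AttributeValue> item = new HashMap<>();
        item.put("imageKey", AttributeValue.builder().s(key).build());
        item.put("labels", AttributeValue.builder()
                .s(labels.stream().map(Label::name).collect(Collectors.joining(", ")))
                .build());

        dynamo.putItem(PutItemRequest.builder()
                .tableName("ImageMetadata") // assumed table name
                .item(item)
                .build());

        return "processed " + key;
    }
}
```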

June 19, 2025 · 3 min

How I Built This Blog Using AWS and Hugo

I just launched my personal blog! 🚀 In this first post, I want to share how I built it from scratch using Hugo and AWS services like IAM, S3, ACM, and CloudFront, all behind my own custom domain: nublog.cloud.

🛠️ Tools & Services I Used

- Hugo – static site generator for blazing-fast content
- Amazon S3 – for hosting the static site
- AWS Certificate Manager (ACM) – for issuing a free SSL certificate
- Amazon CloudFront – CDN for HTTPS support and global delivery
- IAM – to manage secure access to AWS services
- Namecheap – for my custom domain nublog.cloud

⚙️ Key Steps I Followed

1. Set up Hugo locally

```sh
hugo new site my-blog
cd my-blog
git init
git submodule add https://github.com/adityatelange/hugo-PaperMod.git themes/PaperMod
echo 'theme = "PaperMod"' >> hugo.toml
```

2. Added a post (like this one!)

```sh
hugo new posts/my-first-post.md
```

3. Built the site

```sh
hugo
```

4. Synced it to S3

```sh
aws s3 sync ./public s3://my-blog-bucket --delete --profile myprofile
```

5. Created an ACM certificate in us-east-1

Used DNS validation via Namecheap and added the CNAME records.

...

June 16, 2025 · 2 min