On March 27, 2025, the AWS User Group Toronto came together for another powerful session of cloud learning and community networking. The meetup brought together professionals from across the AWS ecosystem—builders, architects, engineers, and curious learners alike.
The theme of the evening was clear: building modern cloud-native solutions and mastering data architecture at scale. We explored two practical topics: serverless application design using AWS Application Composer and Amazon Q, and data evolution through the Lakehouse architecture.
Session 1: Simplifying Cloud Deployments with AWS Application Composer & Amazon Q
Speaker: Bansi Delwadia, Technical Project Manager @ ScaleCapacity
Overview:
This session was a hands-on walkthrough of building a production-ready serverless application from scratch using AWS Application Composer and Amazon Q Developer.
Drawing on my work as a Technical PM on enterprise-scale solutions, I focused on showing how these tools simplify the design, deployment, and development process, bridging the gap between architects and developers.
🔹 Live Demo: Building a Serverless App from Start to Finish
We tackled a full-stack example: creating an API-based service to manage items in DynamoDB using Lambda functions connected via API Gateway. The demo covered:
Designing the architecture visually in Application Composer inside VS Code using the official extension. We modelled the system with three Lambda functions connected to REST endpoints.
Generating CloudFormation templates automatically with Composer and deploying the stack using the AWS SAM CLI.
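To give a flavour of what Composer emits, here is a minimal SAM template fragment of the kind the canvas generates for one function plus its table. This is an illustrative sketch, not the actual demo output; resource names and the `src/` code path are assumptions.

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31

Resources:
  # Illustrative names; Composer derives them from the canvas labels
  ItemsTable:
    Type: AWS::Serverless::SimpleTable
    Properties:
      PrimaryKey:
        Name: id
        Type: String

  CreateItemFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: create_item.handler
      Runtime: python3.12
      CodeUri: src/
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref ItemsTable
      Events:
        CreateItem:
          Type: Api
          Properties:
            Path: /items
            Method: post
```

From there, `sam build` followed by `sam deploy --guided` packages the code and creates the stack.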
🔹 Automating Code with Amazon Q Developer
We then dove into Amazon Q Developer, now integrated into IDEs such as VS Code, and into GitLab via GitLab Duo with Amazon Q.
Using natural language prompts, Amazon Q helped us:
- Auto-generate handler logic for:
  - POST /items (Create Item)
  - GET /items/{id} (Fetch Item)
  - DELETE /items/{id} (Delete Item)
- Add logging and error handling
- Suggest and write unit tests
- Follow best practices (e.g., idempotency, input validation)
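A handler in the shape Q Developer produced for POST /items might look like the sketch below. This is not the demo's actual code: the table name, environment variable, and dependency-injection parameter are assumptions made so the validation and idempotency logic can be exercised without an AWS account.

```python
import json
import logging
import os
import uuid

logger = logging.getLogger()
logger.setLevel(logging.INFO)


def _default_table():
    # Lazy import so the module loads without boto3
    # (e.g., in unit tests that inject a fake table).
    import boto3

    return boto3.resource("dynamodb").Table(os.environ["TABLE_NAME"])


def handler(event, context, table=None):
    """POST /items: validate the payload and create an item."""
    table = table or _default_table()
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}

    # Input validation: require a non-empty "name" field.
    name = body.get("name")
    if not isinstance(name, str) or not name.strip():
        return {"statusCode": 400, "body": json.dumps({"error": "'name' is required"})}

    # Idempotency: honour a client-supplied id so retries don't create duplicates.
    item = {"id": body.get("id") or str(uuid.uuid4()), "name": name.strip()}
    try:
        table.put_item(
            Item=item,
            ConditionExpression="attribute_not_exists(id)",
        )
    except Exception as exc:  # ConditionalCheckFailedException => already created
        if "ConditionalCheckFailed" in type(exc).__name__:
            logger.info("duplicate create for id=%s", item["id"])
            return {"statusCode": 200, "body": json.dumps(item)}
        logger.exception("put_item failed")
        return {"statusCode": 500, "body": json.dumps({"error": "internal error"})}

    logger.info("created item id=%s", item["id"])
    return {"statusCode": 201, "body": json.dumps(item)}
```

The conditional write is what makes retries safe: a second attempt with the same id fails the condition check and is answered with the already-created item instead of a duplicate.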
The focus was on rapid iteration and developer productivity, showing how AI tools can cut boilerplate and let you focus on business logic.
Session 2: The Lakehouse Effect—Transforming Data Storage and Analytics
Speaker: Anna Kaur, Solutions Architect @ AWS
Overview:
Anna’s session focused on the rise of Lakehouse Architecture and its role in unifying structured and unstructured data into a single data plane—eliminating duplication, reducing cost, and simplifying analytics.
🔹 Key Concepts Covered:
Why Lakehouse?
Traditional data lakes offered scale, but lacked schema enforcement and transactionality. Warehouses offered fast SQL but couldn’t handle unstructured data.
The Lakehouse model provides the best of both worlds—data lake scalability with data warehouse features like ACID compliance and schema evolution.
Core Components & Technologies:
- Amazon S3 as the foundational storage layer
- Apache Iceberg / Hudi / Delta Lake for table-level abstraction with time travel and schema enforcement
- Amazon Athena, Redshift Spectrum, and EMR for querying and processing
- AWS Glue and Lake Formation for metadata management, ETL, and access control
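As a concrete taste of these components working together, the sketch below builds an Iceberg time-travel query (Athena engine v3 supports `FOR TIMESTAMP AS OF` on Iceberg tables) and submits it via boto3. The table name, database, and output bucket are hypothetical placeholders, not anything from the talk.

```python
def time_travel_query(table, timestamp):
    """Build an Iceberg time-travel query (Athena engine v3 syntax)."""
    return (
        f"SELECT * FROM {table} "
        f"FOR TIMESTAMP AS OF TIMESTAMP '{timestamp}'"
    )


def run_athena_query(sql, database, output_s3):
    """Submit the query to Athena; returns the query execution id."""
    # Imported lazily so the query builder is usable without boto3 installed.
    import boto3

    client = boto3.client("athena")
    resp = client.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )
    return resp["QueryExecutionId"]
```

For example, `time_travel_query("lakehouse.orders", "2025-03-01 00:00:00")` reads the table as it existed on March 1, which is exactly the kind of reproducible historical read that plain S3 data lakes lack.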
🔹 Real-World Use Cases:
- Multi-team analytics on shared datasets across regions
- Serverless querying of large S3 datasets using Athena
- Using Amazon Redshift as a consumer of Iceberg tables directly from S3
- Real-time updates via streaming ETL pipelines into Lakehouse formats

Anna showcased architecture diagrams and reference designs illustrating how enterprises are transitioning away from batch-heavy ETL pipelines to unified storage-and-analytics systems.
🔹 Audience Questions:
- “Can Application Composer handle updates to existing stacks, or is it only for new deployments?”
- “How do you manage secrets or environment variables when using Application Composer in a team setting?”
- “Is Amazon Q Developer context-aware of existing code or does it generate from scratch every time?”
- “Can Q Developer be used in CI/CD pipelines for auto-generating handler logic?”
- "How does Iceberg handle schema evolution without breaking downstream jobs?"
- "Can Redshift Spectrum now write directly to S3-backed Iceberg tables?"
- "How does Lake Formation enforce column-level access across Glue and Athena?"
These sparked an engaging discussion around data governance, multi-tenant analytics, and query performance tuning for large-scale environments.
👥 Community at the Core
The meetup wasn’t just about knowledge sharing—it was about connecting as a community.
We had:
- Professionals from cloud-native startups to enterprise IT teams
- First-time attendees curious about AWS
- Regulars diving deeper into topics like serverless, AI/ML, and DevOps
Many participants stayed back after the talks to exchange ideas, share projects, and even sketch architecture on napkins (yes, really).
And of course—light snacks, drinks, and plenty of laughter.
🔗 Get Involved
If you're in Toronto and passionate about AWS, this is your community.
Join us at our next AWS User Group Toronto Meetup:
📍 Meetup Page
🔗 LinkedIn—AWS User Group Toronto
We’re here to learn, share, and grow—together.
💬 What About You?
Have you tried Application Composer in your projects?
Are you migrating to a Lakehouse model for data engineering?
What tools or practices have helped you build more efficiently in AWS?
I’d love to hear your experiences—drop a comment and let’s start a conversation.
—
Bansi Delwadia
AWS User Group Toronto Leader | AWS Community Builder