Hey there, friends and tech enthusiasts! It's me, the guy behind Anooserve, and I wanted to give you a sneak peek into what's been brewing. I've been pondering a change of course for Anooserve, and it's all about diving into the fascinating realm of AI models, specifically large language models and generative models.

Access for All: Free Tier and More

We all love free stuff, especially when that stuff typically has a high barrier to entry, so I've decided to introduce a free tier. This tier will include a limited number of inference API calls per day for standard models. My goal is to ensure that AI isn't just a luxury for those with deep pockets but something that anyone can tap into.
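To give a rough idea of what "a limited number of calls per day" means mechanically, here's a minimal sketch of a per-user daily quota. Everything here is an illustrative assumption, not Anooserve's actual implementation: the limit of 50, the function name `allow_call`, and the in-memory counter (a real deployment would track this in the database or a Redis-backed rate limiter).

```python
from collections import defaultdict
from datetime import date

FREE_TIER_DAILY_LIMIT = 50  # illustrative number, not the real limit

# (user_id, day) -> number of inference calls made that day
_usage = defaultdict(int)

def allow_call(user_id, day=None):
    """Return True and count the call if the user is under today's cap."""
    key = (user_id, day or date.today())
    if _usage[key] >= FREE_TIER_DAILY_LIMIT:
        return False
    _usage[key] += 1
    return True
```

The counter resets naturally each day because the date is part of the key; stale entries would just need occasional cleanup.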

For those of you who like to tinker and customize, I've got something special in store. I'm offering paid tiers with options for the creation of Low-Rank Adaptation (LoRA) and fine-tuned models. This means you can make AI your own, tailoring it to your specific needs, or teaching it to be weird. You do you.
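For anyone curious what LoRA actually does under the hood: instead of fine-tuning a full weight matrix, you freeze it and learn a small low-rank update alongside it, which is why serving many custom adapters is cheap. A toy NumPy sketch of the idea (the dimensions and rank are arbitrary; this isn't Anooserve's training code):

```python
import numpy as np

# LoRA: keep the pretrained weight W frozen and learn a low-rank
# update B @ A with rank r much smaller than the matrix dimensions.
d, k, r = 512, 512, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))          # frozen pretrained weights
A = rng.standard_normal((r, k)) * 0.01   # trainable, r x k
B = np.zeros((d, r))                     # trainable, d x r; zero-init
                                         # so the adapter starts as a no-op

x = rng.standard_normal(k)

# Forward pass with the adapter applied
y = W @ x + B @ (A @ x)

# Parameter savings: a full update would need d*k values,
# the adapter only needs r*(d + k)
full_params = d * k
lora_params = r * (d + k)
```

With these numbers the adapter is roughly 32x smaller than a full fine-tune of that matrix, which is what makes per-user customisation feasible to store and serve.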

A Modern Tech Stack

One of the most significant aspects of this transformation is our adoption of a modern and robust tech stack. I've revamped the architecture to make Anooserve not just functional but highly performant and user-friendly.

Laravel Backend

I have been using the Laravel framework for nearly a decade at this point, so it will come as no surprise that it remains at the core of the new architecture. Laravel is renowned for its elegant syntax and developer-friendly features, making it a perfect choice for creating powerful web applications.

Laravel provides us with a solid foundation for handling complex tasks like user authentication, database interactions, and API development. This means we can focus on what really matters.

Nuxt/Vite Vue 3 Frontend

On the frontend, we're taking full advantage of Nuxt, powered by Vite and Vue 3. This trio ensures a snappy and responsive user interface. Nuxt simplifies the development process with its server-side rendering capabilities and optimized loading, providing users with a swift and engaging experience.

We'll also be making use of Vercel to host the frontend and SSR components on their edge caches, so (hopefully) expect to see good performance no matter where in the world you are.

And, of course, it'll all be styled with a customised Tailwind stylesheet and the PrimeVue component library.