Nya.GG - Video Hosting

October 22, 2025

Spring Boot JSF Java Nya.GG

Introduction

The next big update to Nya.GG is finally here, and it is quite a big one indeed!

Nya.GG is one of my all-time favourite domains because of its cute and salient name. I bought the domain all the way back in 2018, and I genuinely think it’d be difficult to find a similarly short domain in 2025. Over the years, I’ve hosted a variety of projects on Nya.GG, but I always considered it to primarily be a candidate for personal content distribution. Now, I eventually settled on this website, vanilla.sh, for showcasing my projects and satisfying my urge for writing, which left Nya.GG mainly as a platform for media delivery.

I’ve built small APIs in the past that allowed users to upload and display images via REST endpoints, but none as feature-rich as Nya.GG - even though the UI side is lacking on the image upload module, the infrastructure is quite sound, with images stored in a cloud-based object store, local caching, and account creation with role-based permissions. To explain why I did what I did next, let’s take a step back.

Medal.TV And The Plays.TV Purge

In a previous post, I described how I switched to Linux as a daily driver for my home PC, and one of the big drawbacks was losing access to Medal.TV. Medal.TV is a clipping tool that lets you record a replay buffer, clip the captured videos, and upload them directly to their platform. Medal.TV is honestly a great tool, and even now, I don’t have anything bad to say about it. Well, except that they don’t have a mass export button - but they don’t stop you from scraping your videos either, and they keep delivering them with good bandwidth, which is more than you can ask of a company, to be honest. Medal.TV works just as well as, if not better than, ShadowPlay, provides a variety of options for editing your clips, and allows you to upload them directly, free of charge, with no expiration date. If I were on Windows, I’d definitely still be using Medal.TV religiously.

However, the boomers among us might remember a little website called Plays.TV. Plays.TV was essentially a tool similar to Medal.TV, but it was predominantly used in the earlier half of the 2010s and subsequently shut down in 2019. I wasn’t aware of the shutdown at the time, so I lost a lot of clips that I could no longer download, and I want to avoid having that happen again.

As a fun little side note, the Medal.TV team is aware of Plays.TV, too. In fact, Medal.TV bought the Plays.TV domain and has it redirect to their website. They supposedly even tried to recover the video data, but were unable to acquire it. Further information is outlined here. So, Medal.TV does seem to care about its users’ media, and I respect that a lot. But I missed the Plays.TV deadline too - what’s to say it will be different next time? The only way to own my media is to control the storage medium. And the only way I can do this is by hosting my own video uploading service - and that’s exactly what I did.

Extending Nya.GG With Video Hosting - The Problems

Videos are more complicated than images - like, a lot more complicated. For starters, images are small. In previous iterations of my image hosting services, I even went as far as to store them directly on the VPS. I mean, it would take ages to fill up a standard hard drive with screenshots. Videos on the other hand…

That’s not just a problem for storage, but also for downloading and uploading them. It takes much longer to serve a video than an image. Since this was my first time working with videos on the web in this capacity, I learned about HTTP Range headers for the first time here. These allow a client to request only parts of a file, for example, for streaming. Incidentally, that’s also how those “pause and resume” download managers work. The more you know.
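To make the range mechanics concrete, here is a minimal sketch of serving partial content in Spring MVC. This is not the actual Nya.GG code: the URL, class name and local video directory are placeholders, and a real implementation would return a regular 200 with the full file when no range is requested.

import java.io.IOException;
import java.nio.file.Path;
import java.util.List;

import org.springframework.core.io.FileSystemResource;
import org.springframework.core.io.Resource;
import org.springframework.core.io.support.ResourceRegion;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpRange;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.http.MediaTypeFactory;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class VideoStreamSketchController {

    private static final long CHUNK_SIZE = 1024 * 1024; // serve at most ~1 MB per response

    private final Path videoDir = Path.of("/var/nya/videos"); // placeholder local cache directory

    @GetMapping("/videos/{id}/raw")
    public ResponseEntity<ResourceRegion> streamVideo(@PathVariable String id,
                                                      @RequestHeader HttpHeaders headers) throws IOException {
        Resource video = new FileSystemResource(videoDir.resolve(id + ".mp4"));
        long contentLength = video.contentLength();

        List<HttpRange> ranges = headers.getRange();
        ResourceRegion region;
        if (ranges.isEmpty()) {
            // No Range header: start at the beginning and let the client follow up.
            region = new ResourceRegion(video, 0, Math.min(CHUNK_SIZE, contentLength));
        } else {
            // Honour the first requested range, capped to CHUNK_SIZE bytes.
            HttpRange range = ranges.get(0);
            long start = range.getRangeStart(contentLength);
            long end = range.getRangeEnd(contentLength);
            region = new ResourceRegion(video, start, Math.min(CHUNK_SIZE, end - start + 1));
        }

        // Spring's ResourceRegionHttpMessageConverter writes the Content-Range header for us,
        // so the controller only has to pick which slice of the file to return.
        return ResponseEntity.status(HttpStatus.PARTIAL_CONTENT)
                .contentType(MediaTypeFactory.getMediaType(video).orElse(MediaType.APPLICATION_OCTET_STREAM))
                .body(region);
    }
}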

So far, so good, but what about on-application caching? Caching images is easy, can be done with Spring-compatible libraries like Caffeine, and probably doesn’t take up an insane amount of resources, assuming traffic is relatively light (as it tends to be for personal projects). But caching even just ten videos in memory is a rather large strain in comparison. Cloud providers tend to have their own service offerings for this problem, such as Amazon CloudFront. But is this really ideal for a personal project?
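For reference, an image cache built with Caffeine can be as simple as the sketch below, capping the cache by total byte weight rather than entry count. The size and expiry numbers are illustrative, not my actual settings.

import java.time.Duration;

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class ImageCacheSketch {

    // Keyed by image id, valued by the raw bytes.
    public Cache<String, byte[]> imageCache() {
        return Caffeine.newBuilder()
                .maximumWeight(256L * 1024 * 1024)                  // cap total cached bytes at ~256 MB
                .weigher((String id, byte[] bytes) -> bytes.length) // weigh entries by their size in bytes
                .expireAfterAccess(Duration.ofHours(6))             // drop images nobody has requested recently
                .build();
    }
}

Doing the same for videos would mean keeping hundreds of megabytes of hot data in the heap, which is exactly the strain described above.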

Finally, when you share a screenshot with your friends, you probably want to share the whole picture. Or, equally likely, whatever software you used to take the screenshot already took care of limiting it to the relevant section. Even the basic snipping tools on pretty much every operating system have this functionality these days. Maybe you even used Greenshot to add some red arrows for that original office feeling. But with videos, clipping them to a specific duration is often an afterthought. Services like Medal.TV do offer clipping functionality, but regular replay buffers like ShadowPlay, the OBS Replay Buffer or GPU Screen Recorder are just that - replay buffers that store the last x minutes of your gameplay as raw video. For this reason, upload services like Streamable often offer clipping directly on their website. However, this means that, unlike my image hosting service, where users can just upload their screenshots via a simple REST API, the video hosting component needs a full frontend UI.

The AWS Killswitch Experiment

So let’s start with cloud hosting. Did I end up using AWS CloudFront? No. I simply pay 5€/month for a DigitalOcean bucket, and I put a free Cloudflare proxy in front of it, combined with long Cache-Control headers, to reduce traffic to the object store and discourage repeated download attempts.

I respect the AWS suite, and I would love nothing more than to learn to use it properly. But as a single developer, it just isn’t feasible. Early on, I did want to, and I looked into possible ways to set a hard budget limit, but it’s just a complete mess. The best setup my research turned up was some contraption of a Lambda function with permissions to shut down certain services, which required setting up specific privileges, reacting to usage reports on my S3 bucket that were themselves billable, and manually editing the function code every time I added a new AWS service. It’s insane. In the end, I just closed my AWS account entirely to avoid ever being faced with a runaway function.

Truth be told, as much as defenders of the space will argue that the whole point of AWS is infinite scalability with basically no questions asked, it’s hard for me to find any good reason why there is no simple failsafe in place for personal accounts. As I’ve stated in other articles, the internet just isn’t what it used to be anymore, and automated traffic and malicious actors creep into every nook and cranny of every personal project. Even if I were proficient in any of the various cloud consoles, I’m not taking the chance of going three grand into debt overnight because someone’s vibe-coded Python app decided to spam requests at my object store. And I KNOW that’s not a far-fetched thought, because MY vibe-coded Python apps sometimes repeatedly scrape OTHER people’s object stores. And as much as I like to code, I’m not trusting my own evening hobby coding with the risk of personal bankruptcy if my plea to a bunch of Silicon Valley goobers falls on deaf ears.

So for now, my cloud service knowledge will unfortunately remain limited to accessing compatible APIs of prepaid fronts. It’s a shame, but one that I will be very vocal about, because it would be easily avoidable if large cloud providers simply stopped burying their heads in the sand and came up with solutions that allow private users to actually learn their platforms. I understand that their business is enterprise, but isn’t this stifling growth, if interested developers like me can’t actually learn to use the platform and thereby increase the likelihood of it being adopted in a business setting? I suppose if you’re only targeting the top 5% of companies, it doesn’t really matter.

Anyway, the DigitalOcean bucket is working great! It also exposes the same S3-compatible API, which means it can be accessed via Amazon’s own SDKs. MinIO, which I used for local development, shares this API too, meaning local testing is a breeze. I guess the only useful thing AWS ended up producing for this setup was the interfaces for its replacements. Ironic.
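Here is a rough sketch of what that looks like with the AWS SDK for Java v2. The endpoint, credentials, bucket name and Cache-Control value are placeholders, not my real configuration:

import java.net.URI;
import java.nio.file.Path;

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class ObjectStoreSketch {

    public static void main(String[] args) {
        // Point Amazon's own SDK at a DigitalOcean Spaces endpoint (or a local MinIO instance).
        S3Client s3 = S3Client.builder()
                .endpointOverride(URI.create("https://fra1.digitaloceanspaces.com"))
                .region(Region.US_EAST_1) // required by the SDK, largely irrelevant to S3-compatible stores
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create("ACCESS_KEY", "SECRET_KEY")))
                .build();

        // Upload a processed video with a long Cache-Control header so the Cloudflare proxy
        // can keep serving it without hitting the object store again.
        s3.putObject(PutObjectRequest.builder()
                        .bucket("nya-media")
                        .key("videos/example.mp4")
                        .contentType("video/mp4")
                        .cacheControl("public, max-age=31536000, immutable")
                        .build(),
                RequestBody.fromFile(Path.of("/tmp/example.mp4")));
    }
}

Pointing endpointOverride at a local MinIO instance (typically together with path-style access) is essentially all it takes to run the same code in development.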

Designing The Frontend

Those who upload clips online will know those little trimmer bars with draggable handles and a video preview. Honestly, I wasn’t quite sure how to design one of them - it looked pretty daunting. But as I started getting to work hacking together some CSS and plain old JavaScript, things started just kinda falling into place! It was a good exercise in doing something a bit more complicated on the frontend side than the usual grid layouts and containers, and it motivated me to look into a CSS course to tighten up my frontend skills a bit in the future.

A user can drag-drop or select a video for processing.
The video preview is shown as the user determines the clip length.
The library displays videos in a paginated grid with thumbnails.

Processing The Video

“FFmpeg wrapper” is a pretty well-known meme whenever people boast about writing any software related to video encoding, and Nya.GG is no different - for thumbnail generation and video processing (clipping and converting to MP4), FFmpeg is used in the background via a wrapper library. This processing runs inside the application on separate threads. That is an acceptable middle ground for my personal use case, since I don’t expect the load on the application to be very high, and I only allow trusted users, who won’t, accidentally or otherwise, end up DDoSing my server. For enterprise scaling, it would of course make more sense to run a separate, scalable service that reads from a message queue, but that type of architecture is overkill for a personal application. The thread-based approach still allows multiple videos to be uploaded in parallel while keeping the frontend responsive.
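The actual processing goes through a wrapper library, but the underlying idea is roughly the sketch below: an @Async method that shells out to FFmpeg to cut the requested range and re-encode it to MP4. The paths, codec flags and error handling are illustrative, not the real implementation.

import java.io.IOException;
import java.nio.file.Path;

import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;

@Service
public class VideoProcessingSketch {

    // Clips the uploaded file to [startSeconds, startSeconds + durationSeconds] and re-encodes it
    // as an MP4. Runs on a separate thread (requires @EnableAsync on a configuration class),
    // so the upload request returns immediately.
    @Async
    public void clipToMp4(Path input, Path output, double startSeconds, double durationSeconds)
            throws IOException, InterruptedException {
        Process ffmpeg = new ProcessBuilder(
                "ffmpeg", "-y",
                "-ss", String.valueOf(startSeconds),   // seek to the clip start
                "-i", input.toString(),
                "-t", String.valueOf(durationSeconds), // keep only the requested duration
                "-c:v", "libx264", "-c:a", "aac",      // re-encode to a widely supported MP4
                "-movflags", "+faststart",             // move the moov atom so playback can start while downloading
                output.toString())
                .inheritIO()
                .start();

        int exitCode = ffmpeg.waitFor();
        if (exitCode != 0) {
            // In the real application this is where the upload would be marked as failed
            // and temporary files would be cleaned up.
            throw new IOException("ffmpeg exited with code " + exitCode);
        }
    }
}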

Once the video has started processing, the user is redirected to the eventual video player URL. There, a placeholder text indicates that the video is still being processed and prompts the user to return later. In the case of a crash or error, temporary files are cleaned up on the server side, and the upload is marked as failed. When the user visits the URL, they are informed and asked to try again.

This upload is still being processed.
This upload has failed.

A periodic job cleans up any videos that did not leave the processing status after a set amount of time, which may happen in cases of application restarts or crashes. In this case, the video status is also set to “failed”.
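A sketch of what such a job can look like with Spring’s scheduling support; the repository, the status enum and the one-hour threshold are made-up stand-ins for illustration:

import java.time.Duration;
import java.time.Instant;

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class StuckUploadCleanupSketch {

    private final VideoRepository videoRepository; // hypothetical Spring Data repository

    public StuckUploadCleanupSketch(VideoRepository videoRepository) {
        this.videoRepository = videoRepository;
    }

    // Runs every 15 minutes; requires @EnableScheduling on a configuration class.
    @Scheduled(fixedDelay = 15 * 60 * 1000)
    public void failStuckUploads() {
        Instant cutoff = Instant.now().minus(Duration.ofHours(1));
        // Any video still PROCESSING after an hour was most likely orphaned by a restart or crash.
        videoRepository.findByStatusAndStartedAtBefore(VideoStatus.PROCESSING, cutoff)
                .forEach(video -> {
                    video.setStatus(VideoStatus.FAILED);
                    videoRepository.save(video);
                });
    }
}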

Cloudflared Proxying For Rich Embed Testing

There was one feature in this project that I desperately wanted to get to work - rich embeds of videos on social media and chat applications, primarily Discord. Specifically, I wanted an external application to be able to pull the title and video URL directly from the website.

A Discord rich embed.

Discord and other social media applications use the Open Graph protocol, together with Twitter’s card extensions to it, to infer such information. These tags can be added as metadata to a website’s head. Specifically, Nya.GG sets the following tags (see the sketch after the list):

  • twitter:card=player
  • twitter:url=[URL of the web player]
  • twitter:title=[Title that is displayed in rich embed]
  • twitter:image=[Thumbnail URL]
  • twitter:player=[URL of the raw video file]
  • twitter:site=[Username of the user]
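
In the JSF view, these simply end up as meta tags in the page head. A rough Facelets sketch, where the videoView bean and its properties are hypothetical names, not the actual Nya.GG code:

<!-- head section of the video player page; bean and property names are illustrative -->
<meta name="twitter:card" content="player" />
<meta name="twitter:url" content="#{videoView.playerUrl}" />
<meta name="twitter:title" content="#{videoView.title}" />
<meta name="twitter:image" content="#{videoView.thumbnailUrl}" />
<meta name="twitter:player" content="#{videoView.rawVideoUrl}" />
<meta name="twitter:site" content="#{videoView.uploaderName}" />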

As a side note, it is once again amazing to see how Twitter, once so prevalent in social media, data aggregation and text research that they basically extended OpenGraph tags with their own, turned into the irredeemable right-wing hellscape that it is now. But I guess that’s a story for another time.

But how do we test this rich embed functionality in development? We’re not vibe coders, so we know that just posting a localhost link into Discord won’t have the desired effect. That’s where cloudflared comes in. Cloudflared is a tool that is, unsurprisingly, provided by Cloudflare. Basically, it lets you set up a tunnel from any networked device so that it can receive traffic for a domain managed by Cloudflare. With the help of a testing domain (you don’t even need one in most cases, but since Nya.GG works with subdomains, Cloudflare’s freely provided subdomains are not enough), I was able to pretty easily tunnel my development environment to the web and test the rich embed functionality firsthand. I’d go into detail on this process, but I will probably do so in another article, both because I want to highlight this knowledge for later, and because this article is already way, way too long.

A look at the current video player.

A Note On Spring Session Management

A final note on Spring sessions before we’re finally done! When I developed Nya.GG, I had a bit of a problem - I was designing the frontend, but this isn’t a cool Angular frontend that automatically reloads on changes. This is Spring and JSF! Everyone’s favourite legacy stack, and I’m not about to get myself a personalised quote from JRebel. I could get an automatic recompile to work pretty easily, but my session information kept getting lost when the application restarted. To mitigate this, I had to store my session information in the database, which allowed it to persist through restarts.
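Concretely, this is Spring Session’s JDBC support: with spring-session-jdbc on the classpath, a tiny configuration class is enough to move sessions into the database. A sketch (the class name is mine; the SPRING_SESSION table scripts ship with the library):

import org.springframework.context.annotation.Configuration;
import org.springframework.session.jdbc.config.annotation.web.http.EnableJdbcHttpSession;

@Configuration
@EnableJdbcHttpSession // persist HTTP sessions in the SPRING_SESSION tables instead of in memory
public class SessionConfigSketch {
}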

On a note that you can probably guess is not as unrelated as I initially thought, my database compute usage started spiking heavily around this time. Hmm! Well, when I took a look at the monitoring metrics, the culprit became pretty obvious:

I wonder why I keep running out of compute...

So in the end, I disabled database-persisted sessions in production, since I don’t really care about individual sessions persisting through restarts there. But honestly, this is just another reason not to write a Java frontend in the year of our lord 2025. Again, with enterprise architecture, this would probably be a great use-case for a key-value store like Redis, but that would honestly be more effort than it’s worth here.
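The toggle itself can be as simple as restricting that configuration class to the development profile - a sketch, continuing the hypothetical class from above:

import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.session.jdbc.config.annotation.web.http.EnableJdbcHttpSession;

@Configuration
@Profile("dev") // database-backed sessions only during local development; production keeps the default in-memory sessions
@EnableJdbcHttpSession
public class SessionConfigSketch {
}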

Upcoming Projects

Nya.GG has some more overdue updates, mainly a more modern redesign (potentially ditching PrimeFaces altogether), a rework of the image upload module, and the ability to restore thumbnails when local data is lost. However, I tend to be more motivated and deliver better work when I tackle projects in small bursts.

So, before doing more work on Nya.GG, I want to prioritise my other primary project - this website! vanilla.sh has proven to succeed where previous portfolio projects of mine have failed - thanks to a simple and robust tech-stack with version-controlled content, and a backlog of topics that I want to write about. You may be wondering why the current version of vanilla.sh is so basic. Well, it’s because in the past, I often gave up on personal websites like this one, so I didn’t think it was necessary to put too much effort into the site originally.

However, recently I’ve been pretty enthusiastic about the idea of building vanilla.sh up as a proper portfolio page, and that requires giving it more thought than just a grid layout and a navbar. I’ve also recently started getting more into different types of storytelling, whether that be single-player video games, movies, or books, and I would love to write short reviews about some of my favourites on this website and spread the word about them! So stay tuned for some more personal content and a full modern redesign of vanilla.sh!

This will probably take up the majority of my development time in the near future, as I also need to produce the content that lives on this website (my integrity unfortunately prevents me from writing AI slop), and I’ll have to settle into a new tech-stack at work soon. Still, I hope to have the first updates out before long.