API Authentication in a Privacy-Focused Desktop App
Balancing the need for information with privacy and security.

Projects often have competing priorities and limited resources to meet them. How do you strike a compromise that balances multiple goals without spending inordinate time or money?
While every project is different, this is how I reconciled the immovable object of high-quality information with the unstoppable force of requiring no log-ins.
The goal of this program was twofold: to be a library of the games in your backlog, providing general information, and to help you prioritize what to spend time on. It should also help you understand which games you like or dislike and why, with optional reflection and media literacy prompts. Meeting those goals requires high-quality, broad information - which I found on IGDB.
At the same time, I know how frustrating it is to be required to set up an account for a single non-critical service. That matters even more here, since my target audience would be using the app to make the most of their already limited time.
Getting Good Information
While the information on IGDB seems excellent, to use it you need to have an account with Twitch.tv (just “Twitch” from here), and be set up as a developer to get an API Key. This isn’t the easiest thing to do, and goes against my no log-ins goal.
As I looked into it more, I saw that Twitch had several ways to authenticate; maybe one of them would let me get access for the user so they wouldn't need an account.
Broadly, there were three approaches I could use: one targeting set-top boxes, which prompts the user to enter a confirmation code; one for locally run web or mobile apps, which needs an API key; and the one I used during testing, aimed more at servers, which doesn't tie into the Twitch side of things.
Initially it seemed like the set-top option might let users get access through my account, with the code acting like a CAPTCHA. Sadly, no matter what I tried, it still required a Twitch account.
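For reference, the server-oriented flow I used during testing (Twitch's client credentials grant) boils down to a single token request. Here's a minimal Python sketch; the endpoint and parameters follow Twitch's OAuth documentation, but the credential values are placeholders for your own developer keys:

```python
import json
import urllib.parse
import urllib.request

TOKEN_URL = "https://id.twitch.tv/oauth2/token"

def build_token_request(client_id: str, client_secret: str) -> dict:
    """Build the form body for Twitch's client credentials grant."""
    return {
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "client_credentials",
    }

def fetch_app_token(client_id: str, client_secret: str) -> str:
    """POST the credentials and return the app access token."""
    body = urllib.parse.urlencode(build_token_request(client_id, client_secret)).encode()
    with urllib.request.urlopen(urllib.request.Request(TOKEN_URL, data=body)) as resp:
        return json.loads(resp.read())["access_token"]
```

The returned token then goes in an `Authorization: Bearer ...` header on IGDB requests, alongside a `Client-ID` header - which is exactly why this flow can't skip the developer account.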
It seemed like the only options were:
- Include my API authentication with the program - which could be a security issue, and doesn't seem to be a generally encouraged practice
- Don’t use the IGDB API at all - losing a big feature and a key to making the program useful for the average person
- Require the user to enter their own API information and otherwise block API use - while not good for usability, at least basic functionality is preserved
Though it was disappointing, option #3 seemed like the only workable solution.
That was until I saw this in the FAQ:
3. Am I allowed to store/cache the data locally?
Yes. In fact, we prefer if you store and serve the data to your end users. You remain in control over your user experience, while alleviating pressure on the API itself.
A Completely Different Solution
What if instead of requesting data from IGDB for every search, I had a pared down version stored locally as the main source?
There would be issues to overcome, but if there was a way to get a set of data that covers the majority of expected games it would change everything. That would remove most of the need to have an API account, and it would almost completely remove the need for Internet access at all.
The cache needed to be focused or else it would balloon in size and make searching difficult, but what to cut? Because the goal was to help you pick and finish games you own, any games that aren’t out yet could go. Then if there was a way to get the most popular games, just grabbing the top 5-10k should be enough.
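The pruning rule above (drop anything not yet released) can be sketched as a simple filter. The field name `first_release_date` matches IGDB's schema, where it's a Unix timestamp, but the record shape here is otherwise my own assumption:

```python
import time

def prune_unreleased(listings, now=None):
    """Keep only games with a known release date in the past."""
    now = time.time() if now is None else now
    return [
        g for g in listings
        if g.get("first_release_date") is not None
        and g["first_release_date"] <= now
    ]
```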
After a little research I found that they do have a popularity tool that might help. At the time there were four different ways that each game was ranked:
- The game’s IGDB Page Visits
- IGDB users saying they “Want to Play” the game
- Users saying they “Are Playing” the game
- Users saying they “Have Played” the game
Given the program goals, “Played” and “Want to Play” seemed to be exactly what I wanted. But first I needed to see what we were working with.
As a test I got the top ~50,000 game listings for each of the four categories, which revealed:
- Very few games rated very highly on all four metrics, or had a score in all four ratings
- Page Visits covered most games but seemed to be “noisy” with top results often going to unreleased games
- Want to Play was the only category covering more than 50% of the ~60K games I ended up with
- Played was the next closest at just under 50% of games
- Playing had the lowest coverage by far
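Pulling the top listings per category meant paging through each ranking in fixed-size chunks. Here's a sketch of building those queries, assuming IGDB's Apicalypse-style query syntax; the field names and the numeric `popularity_type` codes are assumptions to check against the endpoint docs:

```python
def popularity_queries(popularity_type: int, total: int, page_size: int = 500):
    """Yield Apicalypse query bodies that page through the top `total` entries
    of one popularity category, sorted by descending score."""
    for offset in range(0, total, page_size):
        yield (
            "fields game_id,value; "
            f"where popularity_type = {popularity_type}; "
            "sort value desc; "
            f"limit {page_size}; offset {offset};"
        )
```

Keeping the page size modest also made it easy to throttle requests and stay well within rate limits during the ~50K-per-category test pulls.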
Creating a Popularity Score
By combining the four ratings on a weighted scale, it should be possible to get a good picture of the most popular games. Several tests with different weights ended with “Want to Play” and “Played” carrying the most weight, and “Playing” and “Page Visits” covering the final third of the ranking.
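As a sketch, the combined score is just a weighted sum over whatever metrics a game actually has. The weights below are illustrative, not my final values; they only reflect the shape described above, with “Want to Play” and “Played” dominating and the other two covering the final third:

```python
# Illustrative weights only - the real values came from repeated testing.
WEIGHTS = {
    "want_to_play": 0.35,
    "played": 0.32,
    "playing": 0.18,
    "visits": 0.15,
}

def weighted_popularity(scores: dict, weights: dict = WEIGHTS) -> float:
    """Combine per-metric scores into one value.
    A game missing a metric simply contributes 0 for that metric."""
    return sum(w * scores.get(metric, 0.0) for metric, w in weights.items())
```

In practice each metric would be normalized first (for instance, rescaled to [0, 1] within its category), since raw counts differ wildly between something like page visits and “Playing” flags.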
With a custom popularity scale in place the next step was to decide how many games to cache.
IGDB says it lists 300K+ games, and my popularity tests gave me scores for ~60K, but that was still too many. From what I know about the games industry, it seemed like a small fraction of all games released would account for most of what users would expect.
I started by looking at what came in around the 15K mark for popularity, checking for titles I had heard of, or that didn’t seem extremely niche. Basically a gut check on whether a reasonable number of people could own the game. What I saw looked good, so I went with 15K as a start, resulting in a final cache of just over 10K listings after dropping unreleased or otherwise not useful entries.
In the end I also included an option for the user to put in their own API information and search IGDB directly - since the cache can’t include everything.
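For users who do supply credentials, that fallback is a direct query to the games endpoint. A sketch assuming IGDB's documented header scheme and Apicalypse `search` syntax; the field selection is my own choice for illustration:

```python
def build_igdb_search(term: str, client_id: str, token: str, limit: int = 10):
    """Build (url, headers, body) for a direct IGDB game search
    using the user's own credentials."""
    url = "https://api.igdb.com/v4/games"
    headers = {
        "Client-ID": client_id,
        "Authorization": f"Bearer {token}",
    }
    body = f'search "{term}"; fields name,summary,first_release_date; limit {limit};'
    return url, headers, body
```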
Overall this was perhaps the most difficult single issue faced on the project to date. The result was a smashing success: a comprehensive local cache which provides high quality information while also respecting users who don’t want to create new accounts or give permissions to more apps.
Lessons Learned
- I’ve mentioned documentation before, but this time the lesson is reading the documentation. I didn’t read the FAQ for quite a while, and almost didn’t at all - which would have completely changed how the program works and how accessible it is. Taking a broader perspective on what you’re trying to figure out, and at least skimming everything related to it, can lead to solutions you wouldn’t have imagined otherwise.
- Spend time early on designing your quality control checks, so that they’re small enough to repeat but still comprehensive. For example my initial tests on individual popularity scores and weighted rankings were done on ~50 entries. That way I could adjust weights and rerun the test quickly, without risking overburdening the server or getting banned temporarily.
- Don’t be afraid to experiment. I’m no expert, but I learned a lot about API authentication from the failed attempt at the set-top approach. Likewise, combining a custom popularity metric with bulk listing downloads gave the program what seems like a novel and robust solution.
- Stick with your vision, while being realistic about limitations. While being able to directly pull information from IGDB was a selling point, I felt that not requiring any log-ins was a core aspect of this program’s identity and it would be better to add friction than to compromise if possible. With determination and effort it turned out that both goals were achievable, something I wouldn’t have realized if I’d given up.
Next Time
For my next post I’ll change up the focus a little bit, and talk about Kanban and the Jira platform as I’ve been using them.