Creating the BuiWatch iOS app

In the early days of the iOS app gold rush (2011) I worked on the interface for an app called ‘BuiWatch’ (a loose play on Baywatch; ‘bui’ is Dutch for rain shower). The core function of the app was providing continuous weather information about upcoming rain, pinpointed for your location. At the time nothing like it existed. To achieve this we needed access to precipitation radar data, and we had to find a way to interpret that data and translate it into common concepts of rainfall.

[Image: iPhone 4 screen with the BuiWatch icon and text]

While my friend Rob van Maris handled the coding, my tasks were coming up with an attractive and useful UX/UI and, like him, doing general research. Using skeuomorphism (popularized by Apple in iOS) as a design language was all the rage back then. The iPhone 4 (running iOS 4) took maximum advantage of this with a Retina display containing a whopping 960 × 640 pixels!

Not that many specialised tools for creating and prototyping app artwork existed, so Adobe Illustrator had to do the job. It had just been upgraded with a feature to preview its native vector graphics as they would appear when rendered in device pixels, saving many rendering round trips to Adobe Photoshop. To mimic Apple’s high standard in UI design, every interface element had to fit the target screen pixel-perfectly; anything bleeding over into adjacent pixels would appear blurry. Before this huge time-saver was added, every design iteration needed endless pixel tweaking to achieve the holy grail of pixel-perfection. Scripted exporting of UI assets was not yet implemented back then, though, so some of the headaches saved on pixel tweaking returned when exporting every asset by hand.

For prototyping interactions and task flows I resorted to Apple’s presentation app Keynote, which could handle custom document sizes and linking, and featured animated transitions similar to those seen in iOS 4. The best part was that it was free and also available for iOS, which meant I could author on my iMac and then preview and test the simulated app interactions live on the iPhone screen.

We endlessly discussed and experimented with features, designs, art styles and whatever aspect of app design we could think of. Remember, back then there wasn’t as much common ground in app design as there is nowadays. And of course the release schedule we envisioned was way too optimistic for our ‘little’ side project. We fought our self-induced feature creep, debated the free, paid or freemium revenue model, squinted our eyes to rate readability and information density, and exploited our friends and family for UI beta-testing.

Early on we decided we wanted to display our main data as a continuous looping animation, to give the app a live feel and a sense of urgency. It became a key element in the interface, for which I designed both a temporary and a retro implementation. This led me on a detour to see how I could mimic the plastic grain texture used by those old Motorola police radios. Because when it’s your own project, there are no limits. Also, at that time we both had the bare-bones alpha version of the app on our phones to log every prediction and compare it to our notes describing what kind of precipitation we were experiencing in real life.

In the end the app went through a relatively modest 16 code revisions and 7 or 8 interface designs, but it was never released, so we missed out on becoming app millionaires. Why? Well, for two reasons. First, we underestimated the difficulty of interpreting precipitation radar info. It arrives as an 8-bit data stream with location and time stamps. But how do you translate this numerical data (0–255) into reliable common-language statements about precipitation? Where’s the cutoff between drizzle and rain? What do you do when there is moisture in the sky above you but you feel no raindrops? The data does not discern.
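To illustrate the calibration problem, here is a minimal sketch of one way such a translation could work. The scale factors (0.5 dBZ steps, a −32 dBZ offset) are assumptions based on a common radar-product convention, not the values we used, and the label cutoffs are arbitrary illustrative choices — exactly the judgment calls that made this hard. The rain-rate estimate uses the well-known Marshall–Palmer Z–R relation (Z = 200·R^1.6).

```python
# Hypothetical sketch: turning an 8-bit radar pixel value (0-255) into a
# plain-language precipitation label. All constants here are illustrative
# assumptions, not the calibration the app actually used.

def pixel_to_dbz(value: int) -> float:
    """Convert an 8-bit pixel value to radar reflectivity in dBZ.
    The 0.5 dBZ step and -32 dBZ offset are a common radar-product
    convention, assumed here for illustration."""
    return value * 0.5 - 32.0

def dbz_to_rain_rate(dbz: float) -> float:
    """Estimate rainfall rate in mm/h via the Marshall-Palmer relation
    Z = 200 * R**1.6, where Z = 10**(dBZ / 10)."""
    z = 10.0 ** (dbz / 10.0)
    return (z / 200.0) ** (1.0 / 1.6)

def describe(value: int) -> str:
    """Map a pixel value to a common-language label. The cutoffs are
    arbitrary choices -- the 'drizzle vs rain' problem in a nutshell."""
    rate = dbz_to_rain_rate(pixel_to_dbz(value))
    if rate < 0.1:
        return "dry"
    if rate < 1.0:
        return "drizzle"
    if rate < 5.0:
        return "rain"
    return "heavy rain"
```

Note that even this sketch cannot answer the question above: a pixel can report moisture aloft while the street below stays dry, and no threshold fixes that.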

Second, we decided we wanted a professional app targeted at people who depend on reliable precipitation info. To achieve this we could not just continuously scrape the data from existing weather sites and risk being cut off. Also, Google allowed only a maximum number of location queries per timeframe, so users might be excluded from updates if all location queries came from the same IP address (i.e. our server). I’m not a programmer, so this may not be entirely correct; it’s how I recall the issue. And I’m sure this could all have been solved with some extra code. But it was early days, we didn’t have a lot of experience creating mobile apps, and we worked only part-part-time on the project.

We did research the cost and consequences of investing in a commercial connection to the Dutch meteorological institute KNMI. This would also require a dedicated co-located and scalable server setup. But at €8000+ prepaid per annum for the weather data alone, we hesitated to take the risk. Hunting for partners or investors was not on our mind, and we lacked business savvy and didn’t know how to protect our intellectual property. Also, the iPhone was becoming popular, but not yet mainstream.

In hindsight I’m not sure it was the right decision. But at the time things were different. The app economy was not yet strong, and users were not yet sophisticated enough to distinguish between free options using scraped data and a paid solution. So we feared it would take more than just a programmer and a designer to professionally market our app to potential customers. All of which would multiply the risks and the financial stakes of the project.

So now I’m left with some UI/UX and animation experience, fun memories, lots of old artwork files and a blog post.