It’s project kickoff time, and you’re having a conversation with your client about what form the application will take:
Client: I’m thinking mobile app. Our users will definitely be using this on the go.
Dev: Sure, we can do a native mobile-
Client: Mind you, we’ll want a desktop version too. We’ll need to use it from the office.
Dev: Okay, well, a responsive web app-
Client: One of our priorities is definitely ease of access – we’ll need the app accessible from the home screen, ’cause who has time for typing in URLs, amirite? We’ll also want it to be usable offline, whenever people want.
Dev: Ye-yeah, no problem, we can wrap your web app in a webview, bundle it up as a native app, and-
Client: Yeah, cool. So they’ll just be able to go to the site and install the app, right?
Dev: Well, no, they’ll have to download it from the appropriate app store.
Client: Eh, that’s a no-go – this is internal only, we can’t have it showing up in the app stores. Didn’t I make that clear from the start?
Dev: …
The term your client was looking for is Progressive Web App – an application that acts like a responsive web app when accessed from the browser on any device, but can be installed to mobile devices like a native application. The link above makes the case for PWAs, so we won’t belabour the point – if you’re still here, it’s because you’re convinced it’s time to build a PWA.
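The “installable from the site” part of that wish list comes from a web app manifest, a small JSON file the page links to. A minimal sketch (the name, colours, and icon path here are placeholders, not from any real project) looks something like:

```json
{
  "name": "Internal Field App",
  "short_name": "FieldApp",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#2b6cb0",
  "icons": [
    { "src": "/icons/app-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```

Pair a manifest like this with a registered service worker for offline support, and supporting browsers will offer home-screen installation directly from the site – no app store involved.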
Part 2 of the blog series Cloudy with a Chance of VMs: Scaling Up & Out with Azure explains how to configure an Azure Load Balancer and compares manual VM scaling to auto-scaling via Azure Scale Sets.
[Figure: Backend Pool configuration]
Cloud Computing shines in a cost-benefit analysis; virtually unlimited resources are available at a moment’s notice, and resources must only be paid for if and when they are needed. Unlike dedicated servers, Cloud-based resources scale quickly & automatically to respond to peak loads. They can also provide fault tolerance via replication both within and between data centers. (more…)
When Art & Logic started doing business in 1991, we were a fully remote company that pre-dated the availability of the consumer internet; by the time I joined up in 1997 that transition had happened. My initial internet connection when I started here saw me upgrade from a dialup connection to a local ISP over a 56Kb/s modem to a pair of bonded ISDN lines that gave me a screaming 128 Kb/s connection to the A&L servers and my co-workers.
As the consumer internet exploded, our project mix followed the same transition that the rest of the world did — we went from 100% of our projects being standalone desktop applications running natively on Windows or macOS, to a few web projects (including a ton of embedded web projects — in 2017 you expect to be able to configure networked hardware by pointing a web browser at it, but in the late 90s that was the New Frontier) to our current mix that’s largely web and mobile with some interesting desktop apps tagging along as well.
We, along with every one of the clients that we’ve developed projects for in that period, have depended on strong net neutrality to enable our innovation — once your service is on the internet, your traffic is on the same footing as everyone else’s. As Gertrude might say today, a bit is a bit is a bit.
Recently, the FCC has proposed rule changes that have the potential to turn this all upside down — here’s a bit of background from meta.stackexchange.com:
Back in 2014, the United States Federal Communication Commission, in response to numerous complaints and concerns, implemented a set of rules that prohibit Internet Service Providers from blocking specific content providers or charging them for access to their networks. Essentially, a set of rules that prevent an ISP from double-dipping on service they’re already being paid for, or blocking access to specific websites just for the hell of it.
In order to do this, they had to change how ISPs were classified, moving them from a “Title I” classification to “Title II” – more or less the same framework for regulation that’s been in place for phone companies for decades, establishing them as a so-called “common carrier” – that is to say, one which may not discriminate between customers. If you already assumed that this is how the Internet worked, you’re not alone; however, due to how they were classified previously, the FCC had been unable to enforce rules that would ensure that traffic over the Internet would continue to be allowed to work as, well, traffic over the Internet was expected to work.
(Also scroll down to the other answers on that page for more discussion and links on the topic than you probably have time for today.)
The group Fight for the Future has declared today, 12 July 2017, a day of action for “regular friendly Internet users like you to submit your comments and concerns to the FCC about their plans to do away with net neutrality.”
If you’re in the US and would like to participate, you can:
At WWDC earlier this month Apple previewed ARKit – its initial foray into augmented reality (AR). Alongside the intro session at WWDC, they published Understanding Augmented Reality, which provides a nice overview of how ARKit works, best practices, and its limitations.
Following WWDC the development community has put together a number of great demos that highlight the possibilities and potential of ARKit and the Made with ARKit (@madewithARKit) site has been chronicling some of the best of these.
In my last post I took a closer look at how the Apollo iOS GraphQL client executes queries and what the resulting JSON looks like. In this post I’m going to focus on how the JSON is parsed and converted to the native Swift types generated by the apollo-codegen tool and also look at how the Apollo iOS client caches results. (more…)
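Before diving in, it may help to picture what “caching results” means here. The sketch below is not Apollo’s actual implementation – it’s a toy Python illustration (with an invented schema) of the normalized-cache idea: nested query results are flattened into one flat record per object, keyed by type and id, so later queries touching the same objects can be resolved from the cache.

```python
def normalize(obj, records, key_field="id", typename_field="__typename"):
    """Flatten a nested GraphQL result into flat records; return a reference."""
    record = {}
    for field, value in obj.items():
        if isinstance(value, dict):
            # Nested object: normalize it and store only a reference here.
            record[field] = normalize(value, records, key_field, typename_field)
        elif isinstance(value, list):
            record[field] = [
                normalize(v, records, key_field, typename_field)
                if isinstance(v, dict) else v
                for v in value
            ]
        else:
            record[field] = value
    cache_key = f"{obj.get(typename_field)}:{obj.get(key_field)}"
    records[cache_key] = record
    return {"$ref": cache_key}

# Example response body (hypothetical schema):
data = {
    "__typename": "Post", "id": "1", "title": "Hello",
    "author": {"__typename": "Author", "id": "7", "name": "Ada"},
}
records = {}
root_ref = normalize(data, records)
# records now holds one flat record per object, e.g. "Post:1" and
# "Author:7", with the post's author field replaced by a reference.
```

Because each object lives in exactly one record, a later query that fetches the same `Author` updates it in one place, and every cached query that references it sees the new value.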
In my last post I took a look at using the Apollo iOS GraphQL client framework to access a GraphQL backend running on the Graphcool GraphQL mBaaS. Shortly afterwards Brandur Leach, an API engineer at Stripe, posted “Is GraphQL the Next Frontier for Web APIs?”. In his post Brandur gives a good overview of the current API development space, compares GraphQL to other technologies, and ultimately puts his support behind GraphQL. The follow-on discussion on Hacker News is a bit mixed, with some comments in support of GraphQL along with a few dismissing it. Some advocate support for both REST-like and GraphQL APIs, given that with a sensibly designed backend, supporting both is possible without too much additional work. Stripe has a popular REST API that is used by a lot of developers; given Brandur’s opinions, it will be interesting to see if they take this hybrid approach and start offering a GraphQL interface as well.
Regardless of whether GraphQL will gain more traction compared to other approaches or not, I wanted to dive a bit deeper into the client side of things and get a better understanding of how the Apollo iOS framework and apollo-codegen tool work. (more…)
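To keep the moving parts straight: apollo-codegen reads a query document at build time and emits Swift types whose fields mirror the query’s shape, while at runtime the server answers with JSON nested under a top-level `data` key. A hypothetical query (the schema here is invented for illustration):

```graphql
# Hypothetical schema, for illustration only.
query PostDetails {
  post(id: "1") {
    id
    title
    author {
      name
    }
  }
}
```

and the shape of the corresponding response:

```json
{
  "data": {
    "post": {
      "id": "1",
      "title": "Hello, GraphQL",
      "author": { "name": "Ada" }
    }
  }
}
```

Since the generated types mirror exactly those requested fields, the parsing step the client performs is largely mechanical – which is what the next posts dig into.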