I had the pleasure of attending Compute Midwest 2016 today. There was a fantastic speaker lineup with a few Kansas City natives.

Bibop Gresta  – COO of Hyperloop Transportation Technologies 

Bibop talked about their work on the Hyperloop. The most striking thing about HTT to me is not the technology but the way they’re creating and innovating on the product itself. Instead of a typical startup with employees, the vast majority of the work is done by people remotely from around the world in exchange for stock options per hour they work. This takes remote work and global operations to an extreme. 

A few more interesting notes from his talk:

  • HTT’s plan is cheaper than existing infrastructure (trains / subways) and self-sustaining because it is net energy positive.
  • They’re using a polymer that he called Vibranium, in all seriousness.
  • HTT plans on using virtual windows with IPS displays and facial motion-tracking software to shift the perspective in the window.

Bob Metcalfe – Inventor of Ethernet and Cofounder of 3Com

Bob talked about the history of Ethernet and networking at large. He was a fantastic and engaging speaker. With his 40+ years of experience in the field, he shared insights on building products and dealing with competition, and snuck in a jab at the most recent IoT attack.

  • The idea for radio networking (Wi-Fi) was around before the first spec for Ethernet made it into IEEE; the technology was just lacking, so they pivoted until the time was right.
  • He stressed the freedom of choice among competing alternative products
  • He stressed that it’s not ok to be better than a competitor simply because you’re incompatible. This was a large problem that they fought when ethernet was catching on.
  • Humans are more and more strongly connected and companies who leverage this succeed.
  • Competition hardens us against the status quo.

Kaitlyn Thaney – Director of Programs at the Mozilla Foundation

Kaitlyn talked primarily about how Mozilla is larger than just Firefox, and went into more detail about the Mozilla Foundation. She also covered the history of Mozilla, going back to the Netscape + AOL merger and how Mozilla split off from AOL shortly after.

  • We should iterate until we reach intuition. This is particularly true for web and user interfaces.
  • Web literacy takes more than just developers
  • Hive KC
  • Mozilla Foundation protects and builds the open internet, but open source is still taboo. 

Adam Leibsohn – COO of Giphy

Adam was born and raised in Kansas City. His entire presentation was, as you’d expect, 250 slides of gifs. He started it off by stating loud and clear that it’s pronounced with a hard “g” – real issues first. In a nutshell, the vision for Giphy is to be the search engine of the messenger generation. Text is “outdated” as an input method and things are trending towards images, facial recognition etc.

  • Words are clumsy. They’re excellent at literals, bad at abstracts.
  • The printing press democratized storytelling, photography democratized moment sharing and the gif should democratize emotion sharing.
  • Stories are information disguised as entertainment.
  • Past and present storytelling is linear, but the web isn’t. 
  • Content needs to be translated for the web, not simply transposed.
  • They’re starting to do sentiment analysis on gifs.

Jordan Evans – Engineer at NASA JPL

Jordan shared plans and NASA’s vision for the next Mars rover in 2020. The goals for the next mission are:

  1. Determine whether life ever existed on Mars.
  2. Characterize the climate.
  3. Characterize the geology.
  4. Prepare for human exploration.

The overall discussion was very technical and he reviewed various parts of the next rover. He also mentioned that he considers Europa to be a much more scientifically interesting planetary body to explore, mostly due to its liquid water.

Danny Cabrera – CEO of Biobots

Danny discussed a few of the challenges around 3D printing organic tissue and how they’re currently overcoming them. As he sees it there are two major reasons why biology is difficult:

  1. Biology is done (and always has been) mostly by hand and is rarely automated.
  2. Biology today is done in 2D petri dishes.

Biobots aims to fix both of those issues by moving to 3D models. While printing organs isn’t on their immediate horizon, they seem to anticipate printing for the purpose of testing solutions and medicines more so than transplanting. The idea is that pharmaceuticals, organs and remedies should be hyper-localized to the patient rather than a cure-all for the masses like chemo today.

Davyeon Ross – COO of ShotTracker

Davyeon showed the technologies involved in creating ShotTracker. Their primary goal is democratizing (a keyword at the conference) analytics in sports. They’re starting in basketball but plan to move into other sports.

  • You can’t improve what you don’t measure.
  • Pivoted from pure software to a hardware company
  • Hardware is really hard to do
  • Started purely consumer and moved to team based marketing

Alex Menzies – AR / VR Innovator with NASA

Alex talked about and demoed software created at NASA for rendering Mars terrain in the Hololens, so that geologists can actively participate in research on Mars from Earth. The software takes all photos from the live stream of Curiosity and other orbiters and generates terrain from them. One of the more interesting parts of his presentation was a note in passing about choosing the best images programmatically because of the disparity in pixel density, even at a distance, between the rover and the orbiters.

  • When the first people walk on Mars they will be accompanied by thousands of people on Earth, all living the experience through augmented / virtual reality from all the cameras and sensors on the planet. He called these observers “Telenauts.”
  • In technology we go from astounding to normal very quickly.
  • The Hololens has been instrumental in practicing the installation of radioactive materials on the new rover.

Augmented reality, big data and security were understandably the big topics at the conference. Several people mentioned the recent IoT security breach. Overall it was a fantastic experience with a lot of interesting speakers.

We’ve been doing quite a few server migrations recently and ran into a few peculiarities with changing nameservers on Namecheap. First and foremost:

No trailing periods (.)

AWS Route 53 and other hosting providers will give you NS records with a trailing period. If you use a trailing period in a nameserver record for Namecheap you’ll get a very generic error that doesn’t tell you what’s wrong.

A few other things:

  1. The name server should be provided without spaces at the beginning or in the middle of its hostname (e.g. “ns1. nameserver.com” may result in an error, while “ns1.nameserver.com” will be accepted).
  2. The name server should be provided without the trailing period at the end (e.g. “ns1.nameserver.com”, not “ns1.nameserver.com.”).
  3. Name servers should be provided in the fields without their IP addresses.
  4. Name servers should be properly registered, i.e. it is possible to point a domain only to an existing name server.
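Since Route 53 hands you several NS records at once, a tiny helper can save some copy-paste cleanup before pasting into Namecheap. This is just a convenience sketch (the function name is my own, not anything Namecheap or AWS provides):

```javascript
// Normalize a nameserver hostname: strip stray whitespace and the
// trailing period that Route 53 and other providers include.
function normalizeNameserver(ns) {
    return ns
        .replace(/\s+/g, '')  // remove spaces, e.g. "ns1. nameserver.com"
        .replace(/\.$/, '');  // remove the trailing period
}

normalizeNameserver('ns-123.awsdns-01.com.'); // "ns-123.awsdns-01.com"
normalizeNameserver('ns1. nameserver.com');   // "ns1.nameserver.com"
```

It won’t fix an unregistered nameserver, but it does catch the two formatting errors above before Namecheap’s generic error message does.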

Sendgrid Labs has a great tool for stress testing sites and applications called Loader.io.

To get started all you need to do is upload a small text file with a unique key from Loader.io into the root of your site to verify ownership, then start running your tests. In the free version you can hit the application or site with up to 10,000 clients per test, for a minute at a time.

The free version shouldn’t be used as the end-all test for stress testing an enterprise level site, but it’s a convenient, easy to use solution for finding bottlenecks and glaring issues.

Co-authored by Preston Chandler

When used appropriately, “No” can be our most powerful tool. Truly powerful “no” users employ it in the following ways:

No, but…

… we could always try ‘x.’

We don’t need to be yes-(wo)men or flip-flop on our opinions, but we do need to give reasonable alternatives to the desired course of action. Simply saying ‘no’ without any follow-up leaves the decision completely one-sided and prevents actual collaboration.

By providing an alternative we’re showing that we do care about the problem at hand and are actively looking to come to a compromise.

I don’t know…

… but I’ll find out for you!

Saying that we don’t know the answer to something can show humility. The only thing we know for sure is that we don’t know everything. However, we are very willing to learn.

No, because…

… x, y, z.

Talking others through our thought process will help them to understand where we are coming from. They still may not agree with us, but they will better understand us. The ability to back up our opinions with facts, experience and a logical position is critical when speaking with someone who expects us to be experts in our given field and instills trust in our knowledge.

Not right now…

… there are higher priorities

Many times a specific course of action isn’t incorrect, it is just not timely. Taking that same course of action at a later date may prove beneficial.

In addition to the specific language used, the following “Nos” will also help:

  • No Nonsense – Don’t be wishy-washy. Own the “No.” This is a great way to emphasize your role as an expert.
  • No Pride – Strong Opinions, Loosely Held. Recognize quickly when you are incorrect. It isn’t too late to change the “no” to a “yes.”

This post on Angular 2 was co-authored by Landon Cline

It’s no secret that the JavaScript world is moving incredibly quickly to the point of exhausting anyone actively involved in developing products using a remotely ‘modern’ JS style.

With your JavaScript fatigue on the backburner, let’s talk Angular 2.

Why the fuss over a version change (Angular 1 > Angular 2)?

So much has changed in Angular 2 that it is important to think of it as an entirely new framework and leave what you know about Angular 1 at the door. While Angular 1 is still an awesome framework, one of the core things Angular 2 tries to accomplish is to be more performant and to offer guidance on a recommended architecture. Although it has not been released yet, the Angular 2 CLI tool will help with scaffolding, generators and built-in testing suites (Angular unit testing… not just a myth anymore).

For a reference guide on some of the syntax changes between Angular 1 and Angular 2, check out this article.

Major Decorators

JavaScript decorators are a proposed ECMAScript feature (often labeled ES2016), not something specific to Angular. Angular 2 employs several major decorators.

In Angular 2, decorators typically annotate classes, providing metadata about the component.

@Component({ … });

@Directive({ … });

@Pipe({ … });

@Injectable();

@RouteConfig([ … ]);


The @Input() decorator declares an input property, usually used to send data into a component.

// Your component JavaScript
@Component({
    selector: 'example-component'
})
export class ExampleComponent {
    @Input() title: string;
}

// Your component HTML
<example-component [title]="'Some title'"></example-component>


The @Output() decorator declares an output property that can be subscribed to, using an EventEmitter (which is built on RxJS).

// JavaScript component emitting an event
export class ExampleComponent {
    @Output() foo: EventEmitter<string> = new EventEmitter<string>();
    sendEvent() {
        // Emit a value to any subscribers of `foo`
        this.foo.emit('some payload');
    }
}

A Pipe takes in data as an input and transforms it in some way. String modification, for example.

There are several built in pipes including:

  • Date
  • Decimal
  • Percent
  • Uppercase / Lowercase


// Custom pipe
@Pipe({
    name: 'customPipe'
})
export class CustomPipe {
    transform(value) {
        return value.toString().replace('-', ' '); // e.g. swap a dash for a space
    }
}
// Registered in the consuming component via: pipes: [CustomPipe]

// Pipe in use in a component
<p>{{foo | customPipe}}</p>


The Angular Router navigates users from view to view and allows passing custom data to individual routes.

Angular 2 routing is managed through the @RouteConfig() decorator. You do have the option of using either an HTML5 Location Strategy using the history pushState or the older HashLocationStrategy “hash bang” method.

@RouteConfig([
    {
        path: '/',
        name: 'Home',
        component: HomeComponent,
        useAsDefault: true,
        data: {
            hideSideNav: true
        }
    }
])

Dependency injection

Angular has its own dependency injection (DI) framework. You use the @Injectable() decorator to mark a class as something that can be injected. After that you need to include it as a provider and in the constructor of the component you want to use it in.

If you want to use an injected class across the entire application, it should be added to the providers of your root (parent) component rather than to each individual component.

// Your class where you declare the injectable component.
@Injectable()
export class ExampleComponent { … }

// The component you are injecting into.
@Component({
    selector: 'my-app',
    template: ``,
    providers: [ExampleComponent]
})
export class AppComponent {
    constructor(_exampleComponent: ExampleComponent) { … }
}

Transclusion in Angular 1 is now referred to as Projection. This allows a way for a parent component to insert markup into a child component. The <ng-content> tag is used to accomplish this and helps to keep your nested components flat.

@Component({
    selector: 'my-thing',
    template: `left <ng-content></ng-content> right`
})
class MyThing {}

@Component({
    selector: 'my-app',
    template: `<my-thing>INSERTED</my-thing>`,
    directives: [MyThing]
})
class MyApp {}

Next Steps

There are a ton of resources out there for learning more about Angular 2. A few that we found very useful for our most recent project are:

Occasionally you’ll run into a problem where you have two or more ranges that you need to equalize. This is probably most common when charting or using graphs, but it also comes into play when using things like HTML5 range sliders, where you need to make a dynamic slider with a range from 0 – 12 equalize to 0 – 100.

Taking the range slider example, let’s say we’re trying to capture how many hours a day something is in use. That means we would likely have a minimum value of 0 and a maximum value of 24.

The Problem

If we want to position something along the slider as it updates we need to know its relative position, but we can’t just take a simple percentage because 0-24 is a very different range than 0-100.

So how do we find a relative position of the current value of the slider?

Enter Linear Equations (Point-Slope)

We’re going to be using the Point-Slope Form to find our relative location.

y - y1 = m(x - x1)

Where m is the slope of the line and (x1, y1) is any point on the line.

The first thing we need to do is solve for m because we already know x and y (our range values).

To solve for m we would convert the Point-Slope form to the following:

m = (y - y1) / (x - x1)

After that we need to multiply the slope by the current value (less the range minimum).

Pseudo Code

This is a reduced example assuming you’re equalizing to a percentage.

var findSlope = function(min, max) {
    var x = [min, max];
    var y = [0, 1];
    return (y[1] - y[0]) / (x[1] - x[0]);
};

var findRelativePosition = function(val) {
    var min = 0;
    var max = 24;
    var slope = findSlope(min, max);
    // We need to use (val - min) in the event our minimum value isn't 0. For example, if we want to use 12-24 as our hourly range.
    return slope * (val - min) * 100;
};
findRelativePosition(12); // 50
findRelativePosition(13); // 54.16
findRelativePosition(24); // 100
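The same idea generalizes to mapping a value between any two ranges, not just hours-to-percent. A rough sketch (the mapRange name and signature are my own, not from any library):

```javascript
// Map `val` from the range [inMin, inMax] onto [outMin, outMax]
// using the same point-slope reasoning as above.
var mapRange = function(val, inMin, inMax, outMin, outMax) {
    var slope = (outMax - outMin) / (inMax - inMin);
    return outMin + slope * (val - inMin);
};

mapRange(12, 0, 24, 0, 100);  // ≈ 50
mapRange(18, 12, 24, 0, 100); // ≈ 50 -- works with a non-zero minimum too
```

Because the output range is a parameter, the same helper covers sliders, chart axes, or any other equalization without special-casing percentages.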

Animation style, acceleration, deceleration and motion are all very important things to consider when designing a user interface. Luckily most of these can be implemented using CSS transitions.

One of the easiest ways I’ve found to edit our own transition and animation styles, beyond the few keywords with good browser support such as linear, ease-in, ease-out and ease-in-out, is Chrome DevTools’ cubic-bezier editor.

If we inspect an element in Chrome and apply a transition to it, we should see a small icon next to the type of transition.


If we click on the icon it will open an editor showing the current transition style.

Editor Open

From here we can select the handles on the line and adjust them and watch an example of the transition before it is applied. This value can then be copied out into our CSS!
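Once copied out, the value drops straight into a normal transition declaration. As a small illustration (the specific curve below is Material Design’s “standard” easing, used here purely as an example, not anything DevTools produces for you):

```css
/* Example only: Material Design's standard easing curve */
.card {
    transition: transform 300ms cubic-bezier(0.4, 0.0, 0.2, 1);
}
```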


If you’d like to learn more about motion and animation in UX design, check out Google’s Material Design Guidelines on motion.

Google Chrome will shortly begin shaming all sites that don’t have an SSL certificate installed by showing the red ‘X’ in the URL bar.

Previously the ‘X’ was only shown if the certificate was invalid or insecure, but the new push to encrypt everything is leading to showing the ‘X’ even if there isn’t a certificate installed at all.

You can read more about the change on the official Google proposal.

Testing your local environment against Xcode and Browserstack is great, but they’re still just emulators. If you need to test against an actual mobile device it’s useful to load your current local environment on a mobile device and do your testing immediately before you push to a remote server.

The following instructions are pretty specific since most Front End work here is done on a Mac but I imagine the instructions would be similar for a Windows machine.

For the record, there are third-party plugins that can do this as well, but not having them installed as part of your workflow shouldn’t stop you.

Mac to iPhone:

  1. Verify that your server is running locally, for example, http://localhost:3000
  2. Connect to the same network on both devices.
  3. Find your computer’s name by either looking in your sharing settings or opening up your terminal. Ex: acarlson-mac
  4. In a browser app on your phone go to {{your-computer-name}}.local:{{port}}

Mac to Android:

  1. Verify that your server is running locally, for example, http://localhost:3000
  2. Connect to the same network on both devices.
  3. Find your computer’s internal IP address by either looking in your sharing settings or using ifconfig in your terminal.
  4. In a browser app on your phone go to {{your-internal-ip}}:{{port}}

  1. SSH into your server
  2. Install zip and unzip.
    1. CentOS / Fedora / Red Hat: yum install zip and yum install unzip
    2. Debian / Ubuntu: apt-get install zip and apt-get install unzip
  3. Zip a file: zip new-zip-file.zip file-to-zip
  4. Zip all files in directory: zip new-zip-file.zip *
  5. Zip a directory: zip -r new-zip-file.zip directory-to-zip
  6. Unzip to the current directory: unzip file-to-unzip.zip
  7. Unzip to a specified directory: unzip file-to-unzip.zip -d /directory-to-unzip-to
  8. List all files in the zip: unzip -l file-to-unzip.zip

Continuing on with my Adventures with AWS, I needed to point a subdomain to Amazon Web Services while keeping my primary domain and its DNS handled at another host.

I would consider this a relatively common need: you have your primary domain hosted somewhere that handles your DNS for you, but you want to host certain parts somewhere else. This gives you flexibility to add new hardware, utilize cheaper or free hosting, and a whole host of other benefits depending on your situation.

For the purposes of this example, the subdomain we’re going to point is aws.

It may seem complicated, but we can do it in just a few steps as long as you have an Elastic IP already configured and added to your EC2 instance. If you don’t, check out my tutorial on setting up your Elastic IPs first.

Pointing a Subdomain to AWS

  1. Log into your hosting account that handles your DNS for the domain you want to point. This may be where you registered the name, or it may be your current hosting provider.
  2. Open up your DNS zone and add an A record for the new subdomain. It should look something like: aws.yourprimarydomain.com. 43200 IN A {{ELASTIC IP}}
  3. If your hosting provider gives you a user interface to change your DNS settings, you’ll want to add a new record, or row with the name as aws (or your subdomain), the type as A and the data or value as the Elastic IP you have pointing to your EC2 Instance.

This can take some time to propagate, up to 72 hours, but if it’s a new subdomain it’s likely to propagate much quicker.


If your AWS EC2 Instance is unresponsive from the browser and you aren’t able to SSH in from your terminal, you may need to restart it.

This can happen for a few different reasons, one of which (the one I ran into) being memory management. I was running both Bitbucket and Jira on an EC2 Instance that was just too small, and became unresponsive multiple times.

Sometimes a ‘reboot’ won’t do it and you actually need to stop and start the instance. The difference is that a reboot doesn’t fully release the instance’s resources, so anything stuck in memory stays there; a full stop and start clears it out.

Restarting Your Amazon EC2 Instance

Assuming you have an AWS Instance already created:

  1. Log into your AWS EC2 Management Console.
  2. In the left hand menu select ‘Instances.’
  3. Right click on the Instance you want to restart and hover over ‘Instance State’ and choose either ‘Stop’ or ‘Reboot.’ In my case I needed to actually stop the server and then start it again. Try a reboot first, just in case, but if that doesn’t work then try stopping the server altogether.
  4. Once it completely stops, right click and choose ‘Start,’ then try to log in again and continue debugging why it crashed in the first place.

An Amazon Web Services Elastic IP is essentially an IP address that is tied to your AWS account. It’s not specifically tied to any given EC2 instance and can be used for multiple instances.

When you launch your first EC2 instance you’re immediately assigned a Public IP address. This IP address is not permanent and if you restart your instance for any reason, a memory leak, hardware failure or anything else, that IP address will be dropped from your account and you’ll be assigned a new one.

This won’t do you much good if you want your site to be consistently accessible from the same address. That’s where Elastic IPs come in.

An Elastic IP is a permanent IP address that is tied to your AWS account. That means you can start and stop as many EC2 instances as you want; the IP address will remain constant.

One thing to note: if you have an Elastic IP address in your account that is not assigned to an EC2 instance, you will be charged for it. If you are using the IP, you will not be charged.

Setting up an Elastic IP

Assuming you have an AWS Instance already created:

  1. Log into your AWS EC2 Management Console.
  2. In the left hand menu under Network & Security select ‘Elastic IPs.’
  3. Select ‘Allocate New IP Address’ and follow the prompt.
  4. Once you’ve allocated an IP address, in the left hand menu of the Management Console navigate to ‘Instances.’
  5. Right click on the instance you’d like to edit and select ‘Networking > Associate Elastic IP Address’ and choose the IP address you just created.

Congratulations! If all went well your instance will now be accessible from the static IP address. This will not change when the server reboots so you’ll be able to log in and manage your server much more reliably.

Hosting in 2016 is no small matter. There are a ton of great hosting providers out there, each with their own unique spin. The classics like Hostgator, Mediatemple, Namecheap, etc. are still around and doing great work, but there are relatively new players in the field too, like DigitalOcean, who bill by the hour for a scalable VPS.

Billing by the hour for a VPS is by no means a new concept; AWS has been doing it for years, but I’ve always been intimidated by the thought of not having a GUI like WHM or cPanel to manage them.

Until last week, that is. My VPS started running into memory issues because I had a LAMP stack running alongside my Jira and Stash installations. Jira and Stash are notorious for being memory hogs and I needed a solution fast to prevent any more issues.

My first thought was to spin up a droplet on Digital Ocean but a friend suggested I at least take a look at AWS. I started looking at the pricing and knew I’d need at least a 2GB VPS. On AWS I can get a 2GB virtual server for ~$13/mo. On DigitalOcean it would be $20/mo so I decided to give it a try. Because it’s billed by the hour I didn’t have much to lose.

Turns out, it’s way easier than I thought it would be. You do need to get a little comfortable with the terminal but most developers have some experience with Node, Gulp, Grunt or other command line tools at this point.

Getting Started with AWS

  1. Log in to or register for AWS.
  2. Go to the EC2 Management Console and click on ‘Launch Instance.’
  3. Select the machine image that you’d like to start with. This means you can launch a Windows or Linux (Red Hat, Generic Linux, SUSE or Ubuntu) server. It’s basically your starting point for your server. For my purposes I selected an Ubuntu 64-bit server.
  4. Select the server family that you’d like to launch. This is what determines the size and configuration of your server.
  5. At this point you can either click Review and Launch to get started with your server, or you can configure additional options such as adding storage and determining which ports you want to be accessible.

All in all it took me less than 5 minutes to launch and SSH into my new Amazon instance.

AWS might not be for everyone; you do need to do a little more configuration and you don’t have quite as much support as with some other hosting providers, but you do have immediate access to the whole host of other services that AWS offers, such as load balancing, S3 storage and so much more.

So far, I’m really enjoying my experience with AWS. I think it’s important to stretch ourselves professionally, which for me meant digging deeper into server admin tasks, and I managed to save some money in the process!

In development, client focused development in particular, it’s easy to get discouraged by requirements.

  • You must use a certain CMS.
  • You must fit everything within an outdated framework.
  • You must use an outdated version of jQuery.
  • You must support unreasonable legacy browsers (IE6, I’m looking at you).
  • You must run every change through x levels of client approval.

The discouragement doesn’t come from the requirements themselves, but the fact that they prevent you from producing the quality work that you would be proud of.

There’s nothing wrong with this discouragement, but don’t let it make you resentful. Those requirements are forcing you to learn and think about solutions in new ways.

Often we look at a set of requirements and then dread working until the project is over. Whether we like it or not, that’s reflected in the quality of work produced. Even if you aren’t able to write the code you want for a project, it should be the best code you can write given the situation.

Write the best you can with what you’ve been given.

Apple Music was released today and with it comes a monthly subscription. A single membership is $9.99 and a family membership is $14.99.

Want to use the 3 month free trial but don’t want to be billed when it runs out? It’s easy to turn off the auto-renew.

After you’ve signed up, do the following (jump down if you just want screenshots):

  1. Open the new Music app
  2. In the upper left hand corner tap your Account
  3. At the bottom of the Account Settings, tap ‘View Apple ID’
  4. Towards the bottom of the Apple ID Settings, tap ‘Manage’ under ‘Subscriptions’
  5. On this settings screen you can adjust the renewal process (auto or manual) and whether you want to change the account type.

Turn off Apple Music Auto-renew





When working with maps and JavaScript, if you need to outline or shade states or counties on a map you need a huge list of coordinates that make up that shape. Sometimes it’s a single polygon but sometimes, like in the case of Alaska, it’s a MultiPolygon.

I’ve worked on more maps in the last couple months than I have in the last few years but haven’t ever been able to find all of the counties of the US in geoJSON/JSON format, only in KML or CSV.

This weekend I wrote a parser that takes the CSVs that hold the data and creates individual files for each state as well as a master file for the whole US.
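The parser itself boils down to mapping CSV rows into GeoJSON Feature objects. A minimal sketch of the idea, assuming a hypothetical CSV layout with a name column and a coordinates column holding the ring data as JSON (the function and column names here are illustrative, not the actual parser):

```javascript
// Turn parsed CSV rows into a GeoJSON FeatureCollection.
// Assumes each row has: { name, coordinates } where `coordinates`
// is a JSON string of Polygon rings.
function rowsToGeoJSON(rows) {
    return {
        type: 'FeatureCollection',
        features: rows.map(function (row) {
            return {
                type: 'Feature',
                properties: { name: row.name },
                geometry: {
                    type: 'Polygon',
                    coordinates: JSON.parse(row.coordinates)
                }
            };
        })
    };
}
```

A MultiPolygon case (like Alaska) would just swap the geometry type and nest the coordinate arrays one level deeper.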


All US State Boundaries

All US State County Boundaries

US County Boundaries State by State

5 years ago I bought my first smartphone. It was a Motorola Droid 2 and it was awesome.

Having only used feature phones up until that point, when I bought my Droid 2 I discovered a new world where anything and everything I could imagine was always at my fingertips. I instantly had a GPS, the Internet, an MP3 player, a video game console, a high quality digital camera and more everywhere I went.

It was at that moment that an Android fanboy was born. I went the whole 9 yards. I started learning Android development, I rooted my phone (obviously), I customized it to no end and I tried new ROMs and early-release updates on a weekly basis.

But with this newfound love for Android came an equally strong disdain for all other phones. I didn’t care that the iPhone came first and was one of the original innovations in the field. It didn’t matter, Android was king.

I had what I thought to be legitimate reasons for believing this. Most of my problems with iOS were related to storage and customization. I really enjoyed having a phone that I could connect to a computer as external storage. I liked being able to customize the majority of the phone without even rooting and I loved the fact that it was Open Source.

So 5 years passed and I was your classic Android evangelist. To anyone who asked (and would listen), I would explain why Android was superior and list my grievances against the iPhone. But guess what?

I don’t care anymore.

No seriously. I own an iPhone 6 and I love it. It’s incredibly simple to use, it’s intuitive, it just feels right. But then again, Android has a lot of those similar qualities and more.

I am now firmly of the opinion that Android, Windows Phone, the iPhone, Blackberries etc all have their place. One reason there are so many different types of phones is that no one’s needs are identical.

Right now an iPhone fits my needs perfectly. From 5 years ago to this point it was Android. Who knows, maybe in another 5 years a Blackberry will be just what I’m looking for. The point is that there isn’t, and probably shouldn’t be, a one size fits all solution. Pick the phone that works best for what you need it for, and enjoy. Looking back, my staunch love for Android and nothing else was misguided and wasn’t actually doing anyone any good.

The only reason we should care about what type of phone a person is using is from a development and designing standpoint. We need to test and optimize for all phones, tablets, computers, watches, glasses and whatever else our users are viewing our apps or websites on. It’s not easy, in fact it’s damn hard, but it’s absolutely worth it.