Idle Time

As a thought experiment, imagine you were restricted to using a single device to interact with the whole digital ether. Exclusively one: laptop, tablet, smartphone, e-reader, you name it; for the rest of your days, it would be the one and only device you would ever be "touching". The good news is that you get to choose, of course.

I have presented this riddle to many friends and, invariably, their answer is the same: smartphone. To their surprise, my answer is, and has always been, the old-fashioned PC; more precisely, macOS[1].

My answer to this question has its raison d'être, since most of my waking hours go by in front of a computer. Not just at work, but also during leisure time. The computer is the tool I use to channel my creativity, from writing, coding and designing to reading and communicating, but it is also my go-to device for absolutely everything else.

I do not dislike smartphones; I understand you can get plenty of stuff done with them, but for me they never "clicked" as creation tools. My iPhone home screen is almost "factory settings" and its main uses are limited to podcasts, audiobooks, music, messaging and hailing a cab from time to time.

The computer is the machine I grew up with, the one through which I discovered a whole new world: from DOS to the dawn of the Internet, I learned to love its design and appreciate its craft, but ultimately I became fascinated by how it worked.

For these reasons, I have always been drawn to the keyboard as an input device; hence, keyboard shortcuts are my thing. Although I don't use them as much as I love them, even today, every time I invoke one, something feels right deep inside.

My approach to remembering, using and learning new keyboard shortcuts has always been the same. I do not employ heavy machinery such as Text Expander or Keyboard Maestro. I just try to stay aware and spot routines I repeatedly perform with the mouse, until the inevitable thought of "I'm sure there's a shortcut for that…" pops up. Then I research the shortcut and meticulously log it in a "Shortcuts in Use" note that has been in the works forever.

Believe me, after this 500-word introductory detour, the story eventually lands somewhere. As a matter of fact, this post was inadvertently and without permission seeded in my mind a few months ago, when this product climbed to the very top of the Product Hunt ranks. It caught my attention immediately because somebody had just "productized" the list I had been curating for years. I loved it.

After a more than deserved "upvote", the product itself inspired a broader examination of how we interact with our devices and the impact they potentially have on our minds.

Before we dive in, please keep in mind that these lines are not grounded in academic research (there are plenty of studies documenting this phenomenon), but rather in my personal journey and a humble observation on how to ensure our time in front of our devices is well spent.

My working assumption revolved around how much of the time spent in front of a computer had actually become idle time: non-productive time, without a clear task or particular goal to achieve, just wandering around, being with the computer, or, well, procrastinating.

For a curious monkey mind, sitting down in front of a computer with no predefined task to accomplish will inevitably become a recipe for failure. In my particular experience, idle time meant playing around with app settings, re-reading an article or rethinking the way my filing system worked: unremarkable activities that directly translated into anxiety.

While in idle mode your mind runs fast, it operates on autopilot, but it is going nowhere. This is a nasty loop, because being in idle land feels effortless and comfortable, but at the same time you are also aware you shouldn't be there in the first place.

Well, at this point you might be wondering what keyboard shortcuts have to do with idle time and whether there's even a connection between the two. A few months ago, I was wondering exactly the same. It turns out they have a lot to do with one another and, indeed, such a connection exists.

The trick that ties everything together is one of the simplest, silliest things I've done lately, and it has had a major impact on my daily life: moving the mouse to the left of the keyboard.

Its immediate consequence: using the computer now required deliberate effort.

Thoughtless, fast-paced, muscle-memory mechanics were not available anymore. Wandering was therefore not an option, because the ease with which I used to navigate the computer was completely gone. Every single time I was about to fall back into idle mode, I ran into the inconvenience of a left-handed mouse, and the itch immediately vanished.

The computer wasn't as effortless as it used to be; I had simply unlearned, to put it in Star Wars terms, and thus our relationship changed forever, for the better. Now every time I sit (well, stand) in front of the computer I have a clear goal in mind. It has earned back its purest creative soul.

It is a funny feeling, though. Everything I can "do" now ties back to my keyboard expertise; or, well, if I want something badly enough, I know falling back to the uncomfortable mouse experience is still an option. This has inevitably expanded my shortcuts portfolio in ways I could never have imagined, to the point that I stopped logging new ones in my note, which is where Shortcuts.design came in handy :)

Finally, and most important, the anxiety associated with idle time has completely gone away, and I find myself in a "flow" state more often than ever before.

That was quite a long story, so, to wrap everything up, I want to come full circle, back to Michel van Heest, the man behind Shortcuts.design. This week I randomly came across his Medium post about the "behind the scenes" story of his product; I not only clapped the shit out of it, but it also reminded me how much my life has improved just by embracing the keyboard.


[1] That's a subtle distinction, though, because my answer presumes a platform rather than just a device, which is not entirely fair. I'd rather use white-labeled hardware running macOS than a MacBook running Windows or some Linux distribution. But that's beside the point and food for an entirely separate conversation, one that won't be happening in these lines either.

Ready Player One

Last month I went to the movies. That is kind of an extraordinary event in itself, because I never go. It turns out a wise friend extended the recommendation, told me the movie was worth it, and, as I always do, I trusted his advice.

I knew nothing about the movie until I found myself staring at the backlit billboard next to the cinema's front door. It read "Ready Player One". I had never heard of it but, truth be told, the artwork looked really cool; something I could enjoy, I thought.

Long story short, I loved it. Good thing I trusted my friend, as I always do.

I'm not here to talk about how much I liked the movie, though, but about the follow-up conversations I've been having with several people about its underlying message. The last one happened just a few minutes ago, and it was actually the spark that led me to start typing these lines in the first place.

The widely shared opinion I keep hearing is that the movie belongs to the science fiction realm, that it is a good thought experiment about something that will (most certainly) never happen, or will only happen far, far, far away in the future. In other words: "it is just a movie".

I felt kind of lonely on the other side of this argument, because I firmly disagree.

I'd argue we are already living in the early stages of this "dystopian" reality. Nowadays, as a society, we are building (with an astonishing amount of success) parallel, digital worlds, say games or social networks, for people to lose themselves in. The results of this massive societal experiment speak for themselves, anywhere you go or look.

Although our current means of recreating these experiences are still flat, two-dimensional, backlit matrices of colorful dots (a.k.a. screens), just pay close attention to how these devices, still far from rendering an accurate picture of reality, already captivate human attention.

Look no further: during my commute, despite the multitudes, I can still count the number of people I see with their heads up. If you want to experience this phenomenon to its full extent, watch the younger generations, the true digital natives, the ones who grew up with an iPad attached to their fingertips: they live in there.

Don't get me wrong, I am in love with and deeply fascinated by technology, but our current gadgets pale next to the ideas envisioned in RPO. Our state-of-the-art smartphones are still far from being fully immersive experiences.

Precisely for this reason I am convinced this future is inevitable. If, given the current state of our technological development, we are already attached in ways we could never have foreseen fifteen years ago, imagine how it will all look with the mainstream availability of devices capable of recreating what RPO depicts.

I don't think this last point can even be argued, but it is just one side of the problem. The true, underlying narrative I see around this whole "parallel world" debate is not technological, but societal.

The question we should be asking is not "if" we’ll eventually be capable of developing a digital world virtually indistinguishable from ours. Not even "when". This is inevitable and we all should acknowledge it is going to happen, sooner or later, period.

The question we should be asking, the one this movie is subtly exposing, is at which point it will become cheaper to create an entirely new (digital) world than to fix the one we live in.

Just food for thought 🤔

Udacity Data Analyst Nanodegree

Last January I proudly finished the Udacity Data Analyst Nanodegree (DAND) and this is my attempt, in 1,000 words or less I hope, to publish the kind of post I wish I had read before I enrolled: why I did it, who it is for and, of course, what the experience was like.

Why I Did It

Although Udacity's Nanodegree programs certainly claim to place their students[1] in the most cutting-edge, in-demand jobs, the main reason I joined the program was to level up my (data) game in my current job as a Product Manager at Ironhack, not to start a new career as a data analyst.

More often than not, I found myself dealing with situations involving data flows I didn't fully comprehend. The idea of making decisions without a solid data-driven foundation backing them up sometimes made me feel uncomfortable about the path I was leading my team down. Product meeting after product meeting, I had this nagging feeling that something was missing, that we were not getting the whole picture because of our data ignorance, but still I couldn't see it.

But let me be crystal clear here before we move on: by "data" I'm not referring to the "big data" everybody talks about as if it were teenage sex. Believe me, very few people deal with truly "big" data. The DAND is not about "big data" either, but neither was "big data" what I was looking for. On the contrary, I wanted to address rather smaller things: statistically inclined issues, biases and highly opinionated meetings that were clouding our decisions and ultimately setting the stage for a HiPPO-driven environment.

After an unreasonable amount of research (let's save that for another post) and factoring in my time constraints, on a random Wednesday in April I decided to enroll. It was my first attempt at committing to an online program this big and, I must admit, for better or worse, back then I didn't fully understand what I was signing up for.

In a nutshell, I didn't aim to become a fully fledged data analyst, although Udacity claimed you could if that was your goal. I just wanted to bring data skills to my current job, hoping they would help me to:

  • Ensure our product team was accurately using and making the most out of our data
  • Set up an environment led by healthy and meaningful metrics
  • Make and back decisions supported by data, as an anchor of agreement and a kind of source of truth for our team
  • Leave behind this wild guessing mode we were living in and start doing things right :)

The Program

The whole curriculum was broken down into eight modules (seven plus an introduction)[2], each requiring a dedicated project delivered by the end of it. Each project came with its own submission process (which they don't take lightly), where a Udacity reviewer inspects and grades your work until it meets the rubric's criteria. It goes without saying that in order to graduate you must submit all your projects and get them approved by the reviewers.

Although the program structure has changed a little since it shifted to a term-based model, the topics it covers remain mostly the same.

On top of that, each module and project builds on different technologies: R and Python for data analysis and statistics, NumPy and Pandas for data wrangling, the scikit-learn Python library for machine learning and, of course, Tableau for data visualization. And as if that were not enough, certain modules also brought in additional libraries, which made the tech toolkit even more fun (and complex).
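
To give a flavor of what that wrangling work looks like in practice, here is a minimal sketch in Pandas; the CSV file and column names are made up purely for illustration, not taken from any DAND project.

```python
# Minimal sketch of a typical data-wrangling pass with Pandas.
# "enrollments.csv" and its columns are hypothetical, for illustration only.
import pandas as pd

# Load a raw dataset and get a first sense of its shape and missing values
df = pd.read_csv("enrollments.csv")
print(df.shape)
print(df.isna().sum())

# Typical cleaning steps: parse dates, drop duplicates, fill gaps
df["signup_date"] = pd.to_datetime(df["signup_date"])
df = df.drop_duplicates(subset="student_id")
df["days_active"] = df["days_active"].fillna(0)

# A simple aggregation to answer a product-style question:
# how active are students, on average, by signup week?
weekly = df.groupby(df["signup_date"].dt.to_period("W"))["days_active"].mean()
print(weekly.head())
```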

The number of topics and technologies covered during the program is massive. You definitely walk out of the program with a solid understanding of both the fundamental concepts behind data analysis and the tools a "real" data analyst will encounter in her daily routine.

This is a great approach if the program's ultimate goal is to put its students in a job-ready position in the least amount of time. In my particular case, though, I felt the program was a little too broad, especially judging by the number of "supporting tools" you have to learn from scratch lesson after lesson.

Let me explain: while learning this wide range of technologies (R, Python, Tableau…) is definitely an enriching experience for the mind, it also dilutes the value of the learning outcomes by changing the underlying technology all the time.

If I were to design the program around my personal outcomes, I'd have bet on a single technology, say Python, and built the whole curriculum on top of it. The benefits of this approach would have been twofold. First, students would have achieved a higher level of "code mastery" in that technology, enabling them to build things more quickly and easily, even after the program. Second, by not changing the underlying technology, the program could have focused more on the content itself and gone deeper at every stage, letting the technology fade into the background.

Months after graduating, back at my job (and not working as a pure data analyst), I often find myself scripting some Python and building small helpers to automate nasty, undesirable grunt work. But I have to admit that I've never touched RStudio, Tableau or Jupyter Notebooks since. I'm grateful to be aware they exist, but maybe I could have used that time to go even deeper with Python.
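
For the curious, this is roughly the kind of throwaway helper I mean; a minimal sketch with a made-up folder layout and file names, not an actual script from work or from the program.

```python
# Minimal sketch of a throwaway helper: merge a folder of CSV exports
# into a single summary file. The paths and naming are hypothetical.
from pathlib import Path
import csv

EXPORTS = Path("exports")      # weekly CSV dumps land here
SUMMARY = Path("summary.csv")  # one consolidated file for the team

rows = []
for csv_file in sorted(EXPORTS.glob("*.csv")):
    with csv_file.open(newline="") as f:
        for row in csv.DictReader(f):
            row["source_file"] = csv_file.name  # keep track of provenance
            rows.append(row)

# Write everything into a single file so nobody has to open ten exports
if rows:
    with SUMMARY.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
```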

But again, that's just a personal opinion based solely on my own experience. And don't get me wrong, the program design is superb; I was probably just expecting something the course was not intended for.

The Experience

Finally, what is it like to go through the program? I won't lie: it is hard. Although the course structure is extremely clear, the materials are first class and the projects really engaging, setting aside the time to work on your own, without social pressure of any kind, remains the most challenging endeavor, even for Udacity.

I finished the program in eight months[4], but I was not consistent with my schedule or the number of hours per week I was investing, and that consistency, I believe, is the ultimate "hack" for staying on track with the program.

The main problem I faced went like this: the effort it takes to re-engage with the course grows (exponentially) with the amount of time you spend away from it. In other words, the longer you stay away from the program, the more difficult it gets just to get past Udacity's login screen. It becomes an ongoing battle with your willpower.

I suffered from that, big time. I remember a period around June when, after over a month without completing a single lesson, the thought of dropping out even crossed my mind. I endured, but the chances of this post never being written were higher than you might expect.

On the other hand, I also remember periods where I literally opted out of life and did nothing but the Nanodegree. I was pretty unreliable with my effort and, as far as I can tell, getting this right is something that will greatly ease your way through the program.

Aside from the disconnection from the social experience, which I definitely believe is the most pressing challenge online courses must solve for, the course was really good and definitely delivered on its promise. The materials were well crafted, the projects had a clear purpose and the support you receive from Udacity is extraordinary every step of the way.

So, now that I have graduated, if you were to ask me "would you do it again?", I'd say "absolutely yes" if you are looking for a career move into a data-related role. The DAND is the perfect bridge to land an entry-level job in the field, or even a prep stage before joining an immersive, full-time data science bootcamp.

But as a "career booster", maybe I should have explored other, softer options that would have allowed me to customize my journey a little more. As a counterpoint, I'd also argue that it is easier to see this pattern looking backwards, now that I've already explored the data analyst path; a hypothesis I couldn't have articulated back when I started, because my knowledge of the matter was far narrower.

Well, no matter what, beyond the program's specifics, I'm extremely happy I enrolled in (and graduated from) the DAND. Not only has it helped me at my job the way I expected and planned for from the beginning; it has, unexpectedly, also proved to be an invaluable resource for everyday life and has transformed the way I perceive, through the data lens, even the smallest situations and decisions.


[1] When I enrolled back in April, most Udacity programs were paid on a monthly basis and offered a 50% money-back guarantee if you graduated in less than a year. On top of that, there were two payment options: "basic" for $199/month and "plus" for $299/month. Only the latter offered (subject to certain fine print) a "jobs guarantee", and I quote from their marketing copy: "While all of our Nanodegree programs are built with your career success in mind, you must enroll in our Nanodegree Plus program to secure a jobs guarantee." Since then, most of their programs have been gradually migrating to a term-based structure and their approach to "job assistance" (that's just an opinion) has become less aggressive and looser.

[2] The DAND program structure was upgraded twice during my enrollment. The first time, in September, was a small tweak to the curriculum, which I opted into. The second, in December, was a major change that moved the whole program to a term-based structure, mostly in line with the rest of their new Nanodegrees. Udacity kindly offered to upgrade me to the new one, but I personally stuck with the old model since I was about to finish anyway.

[3] Machine learning is no longer part of the new curriculum; all of its contents have been moved to their own Nanodegree program.

[4] Ideally you're expected to finish in six months, but you got half your money back if you finished in less than twelve. Now that the program has shifted to a term model, though, the option to get your money back for finishing within a certain time frame is no longer available.

Detachment Strategy for the Apple Watch

Apple has hit roadblocks in making major changes that would connect its Watch to cellular networks and make it less dependent on the iPhone, according to people with knowledge of the matter. The company still plans to announce new watch models this fall boasting improvements to health tracking.

Every single time I run into an Apple Watch user, out of curiosity, I ask about their experience with the device. I have heard plenty of valuable feedback and beautiful user stories, but also curious challenges they encounter. Without question, though, the main complaint they usually bring up (besides battery life, of course) is the inability to untether the Apple Watch from the iPhone.

It is a perfectly reasonable claim though. At the end of the day, the narrative for the Apple Watch is about bringing technology closer, creating a more intimate experience without the inconvenience of having your phone in your pocket all the time.

But this narrative breaks down every single time the Apple Watch loses the "connectivity support" of its parent, which usually happens when you need it the most: hiking, going to the beach or any activity where you would prefer not to bring your phone with you.

Some improvements have been made along the way with the introduction of watchOS 2 and the ability to connect the Apple Watch directly to a Wi-Fi network. But to gain full autonomy, the Watch needs to connect to a fully fledged cellular network, the same way an iPhone does. And of course, that is tricky. On one hand, data transmission over cellular drains batteries faster than BLE or Wi-Fi. On the other, the smaller the footprint of the device, the smaller the batteries you can fit inside. With challenges coming from both ends, it follows that, from a technological standpoint, we are not quite there yet.

Regardless, there have always been rumors about Apple becoming its own cellular carrier, which makes perfect sense, since it would allow Apple to integrate the single most important chunk of the experience it does not control. It would automatically translate into seamless device activation, cross-country compatibility, a simpler product line and an endless list of enhancements ultimately benefiting the customer experience.

But it remains an extremely complex endeavor: first, closing deals with operators that are currently partners; then, scaling capacity to provide data to every device, in every single region. Google did something similar last year with Project Fi, but the service was deployed in a more controlled environment, only for selected Nexus models, which not only meant fewer devices but also targeted a more early-adopter type of user.

Where I want to go with this is: what if Apple rolled out the next generation of the Apple Watch with built-in, low-power, worldwide cellular connectivity that helped detach the device from the iPhone? Of course I am not talking about a 4G connection here, but something more like (please, I need a leap of faith here) SigFox. Such a network would not be meant for watching YouTube videos, but rather for receiving an important notification or sending a critical message that can't wait until you reach your phone.

This would probably be the kind of service only Apple's own apps could use in the very early stages. Maybe afterwards it would become accessible to third parties through a private API with strict rules, as has happened in the past with the rollout of other Apple products. Moreover, the Apple Watch would be the perfect device to start with: it already targets pre-chasm users, more willing to put up with "experiments", and it operates at a smaller scale than the iPhone does.

It is not exactly the same thing, but Amazon has been doing something similar with its Kindle lineup for more than ten years now, with outstanding results.

I acknowledge there are plenty of flaws in the idea. But wouldn't it be a clever way to bridge the Apple Watch's detachment gap, while laying the foundation for a worldwide network to power every single Apple device in the long term?

Drivetrains and Free Time

Autonomous cars will be commonplace by 2025 and have a near monopoly by 2030, and the sweeping change they bring will eclipse every other innovation our society has experienced. They will cause unprecedented job loss and a fundamental restructuring of our economy, solve large portions of our environmental problems, prevent tens of thousands of deaths per year, save millions of hours with increased productivity, and create entire new industries that we cannot even imagine from our current vantage point.

I had never heard of Zack Kanter before, but his post about self-driving cars definitely caught my attention. I do not entirely share his dramatic view of the job-market fallout. I ultimately believe we are tool builders; that's what we do. New tools always spur new opportunities and create needs that were not even imaginable before.

There are countless examples of this narrative, from profound industrial shifts, such as the transition from craft production to mass production (where the figure of the industrial engineer and a subset of highly skilled workers displaced massive portions of the craftsman workforce), to more contemporary figures such as bloggers, YouTubers or a distributed workforce that simply was not possible ten years ago.

I do agree, though, that this transition will bring huge societal changes. But beyond that, Zack's words made me reflect on how autonomous cars might reshape cities and shake up some patterns that we nowadays take for granted.

First of all, we have to acknowledge that this is already happening. There are several partially self-driving "ideas" on the market, such as fleets of self-driving trucks crossing entire countries, and this party is just getting started.

On a more emotional level, interest in cars is dropping among younger generations. Smartphones are replacing cars as a proxy for freedom and social status and, paradoxically, they can't be used while driving. Nowadays cars embody a wide range of negative values such as pollution, accidents and congestion… even Toyota's USA President, Jim Lentz, agrees:

We have to face the growing reality that today young people don’t seem to be as interested in cars as previous generations.

The problem, though, is that most conversations I hear revolve around "self-driving cars" when in fact they are trying to describe three different (and partially independent) dynamics:

  • Drivetrains moving from ICE (internal combustion engine) to electric, e.g. Tesla
  • Ownership moving away from individuals to fleets, e.g. Uber or Lyft
  • Operations moving from human to computer-based, e.g. Google's self-driving project

Drivetrains

In an ICE-centered world, the engine is the most complicated and important component of a car. Electric motors are much simpler than their ICE counterparts: an ICE engine is built from hundreds of components, moving parts and complex mechanical systems (think of it as a mechanical watch), whereas an electric motor is extremely simple and can be assembled from fewer than ten components.

The immediate consequence of this transition is that the industry's barriers to entry are torn down. The ICE is by far the most complex piece of a car, and few manufacturers can afford the capital expenses required for its development. In an EV-centered world, engines become commoditized; they can literally be bought from a local supplier.

In other words, the craft and expertise amassed building ICE cars is worth nothing when EVs simply use batteries, computation and software to control a drivetrain.

Another interesting factor in this narrative has to do with batteries, since they will be the most critical component of an electric vehicle. Even more important (and this is pure speculation), EVs might be only one piece of a much larger shift in energy usage and generation. Batteries, on wheels or stationary, might play a key role as the link between multiple energy generation sources.

Ownership

Owning a car is expensive; it is one of the largest expenses for an average family. On top of that, it is a really crappy asset: it quickly loses value, it is hard to maintain and it is not liquid in the marketplace.

Moreover, the (sad) fact that cars spend more than 95% of their lifetime parked, doing nothing, has created an entirely new market for transportation as a service, led by Uber, Lyft, Cabify and an army of local operators.

If this model gains traction and becomes the default option for most people to get from A to B, it follows that cars will be owned by fleet operators, not individuals. This in turn has even more profound implications:

  • In an individual ownership world, manufacturing and selling are bundled. But if ownership changes, car manufacturers are left in the middle, with no leverage across the value chain.
  • It is not just leverage, but also incentives. Car manufacturers are currently incentivized to optimize for the driver's delight, e.g. a ZF transmission that shifts more smoothly and quickly. But if the driver is no longer the buyer, does that matter?
  • Does it mean that choosing a ride becomes much like flying, where passengers pick economy or business rather than Boeing or Airbus, i.e. UberX or Uber Black rather than BMW or Mercedes?
  • Parking is interesting as well. Cities are nowadays built around cars; at any given moment there are more cars parked than moving. Although this is a given for most of us, we can all agree it doesn't make much sense, since parking is wildly mismanaged and is probably our most inefficient use of resources within urban areas. But if cars are no longer owned, just hailed on demand, what happens to the parking industry?
  • The issue goes deeper though: no parked cars means fewer cars, since each car's idle time will dramatically decrease (the perfect on-demand car is one that never stops). It follows that fewer cars shrink collateral industries as well: the $198 billion automobile insurance market, the $98 billion automotive finance market, the $100 billion parking industry and the $300 billion automotive aftermarket will most certainly be threatened.

Operations

The most interesting piece of this triangle is the transition to self-driving, or what happens when we add autonomous cars to the equation.

The obvious consequence is that driving as a job won't be needed anymore. On one hand this is a good thing because, let's admit it, overall we are really bad drivers. The bad news is that driving is the single largest profession in the USA and around the world.

This inevitably ties back to the EV conversation, since automakers will find themselves with misaligned incentives when it comes to pursuing self-driving capabilities: how can they compete on selling a "driving experience" when there's no driver behind the wheel? It's just a "feature race" to a place where they don't matter anymore.

It reminds me of the PC industry in the 2000s, when PCs started being purchased by consumers, but the other way around: the purchaser stops being the driver and becomes a fleet operator, who values other things.

I could go on and on, but what fascinates me the most is the amount of combined time we will save as a whole: millions of hours we will get back "for free" to engage in creative endeavors, leisure, family time, reading, drawing, whatever!

And of course, the most important question: what will we make of it?

Little Hacks

Our goal here isn’t to be defensive and resist our phones but to ask the question, “how can we make our home screen a livable place?” A place we can return to frequently, knowing it will respect our intentions and support our conscious use. And a place that makes room for the thoughts and concerns we want to have, and not the ones we don’t.

From time to time, I ask myself which single habit I can easily stick to right away that will have the largest impact on my daily life.

For example, a few years ago I read The Miracle Morning (good book), where I came across a seemingly harmless quote that might strike you as a rather obvious statement at first, but ended up radically transforming the first hours of my days.

Well, hitting the snooze button keeps us from waking with a sense of purpose. Each time you reach for that button, you are subconsciously saying to yourself that you don’t want to rise to your life, your experiences and the day ahead.

I simply tried it: I put the alarm clock across the room, in a place where I had to stand up to turn it off. I have never hit the snooze button again. Now I wake up every single morning at 6am and get a ton done before 9am. In fact, I'm writing this because of it.

Little action, huge impact and upside.

Along these lines, last week I read this article and something similar happened. My home screen and notification settings were already reasonably quiet, but the article made me revisit some habits and my overall relationship with the device.

Since then, I have come up with a simple set of rules that have radically changed the way I interact with my iPhone.

Home screen

  • Two screens: first of all, address the amount of screen real estate available for apps. Having a predefined number of screens on the springboard acts as a natural constraint on the number of apps you can fit in the canvas. Moreover, I found that avoiding mindless swipes between screens reduces the time "wasted" on the device doing nothing.
  • No folders: keep everything visible. Sweeping complexity under the rug is usually not a good long-term solution and, on top of that, folders are a way to cheat the previous rule, so avoid them.

Apps

  • Stick to the ones you actually use: this seems like a simple rule, but spring-cleaning apps can become a rather unpleasant task. The nagging thoughts of "maybe I need it for X..." and "I used it that one time..." never let up. That's why I took the radically opposite approach: I started with zero apps and installed them as needed. You'll be surprised by how few apps you actually use: right now, my phone has only 16.
  • Default to defaults: require a really compelling case to install third-party apps that mimic Apple's stock ones. This can be controversial for two reasons: first, because we all love third-party apps, and second, because Apple's stock apps are usually not that great. So why do it? I found that "staying default" limits "what you can do" and reduces duplication and complexity across the OS. If there's no strong case, stay default. Mail, Contacts, Calendar, Weather, Photos, Safari... are great examples of stock Apple apps that can cover most of our basic needs.
    • Exceptions to that rule, of course, exist. If the delta in functionality (and enjoyment) an app provides is so massive that it is worth dealing with the added complexity, so be it. In my case, Bear, Spotify and Citymapper are clear examples.
    • Then, of course, there is functionality the OS doesn't provide at all. In these cases a compelling case is still required, but the utilitarian aspect becomes more relevant.

Notifications

Last, but most important, notifications: the single feature that can make or break the experience of using your phone.

The most important realization about notifications is that they clearly follow an 80/20, even a 99/1, distribution. In other words, 1% of the apps produce 99% of the notifications. That 1% is messaging apps, whose very nature as a 1-to-N communication layer turns you into a node of the network, able to receive notifications from any other node that can potentially connect with you.

The problem, though, falls back again to the defaults. The moment you accept an app's request to send notifications, you're granting any person on Earth the ability to light up your phone's screen at will. In other words, you are giving away free, ubiquitous access to the backlight of your most precious and personal device.

This is simply madness.

To avoid this, I created three distinct states an app can inherit as its notification settings (slightly updated for iOS 10, which changed the way History and the Lock Screen work).

  • Off: this is for messaging apps. Turning their notifications off by default has three combined benefits: 1/ your screen won't flash every minute because of an unimportant WhatsApp group notification, 2/ your battery will last way longer, 3/ you won't be peeking at your phone looking for the next dopamine shot, because you already know there will be none.
  • Only show in history: this should be the default mode for almost all apps except messaging. You will still receive notifications, but they won't light up the screen; they quietly stack up in the history tab, waiting for you to go there and check them out.
  • System default: stick to the defaults (that is, lighting up the screen, sounds, badges... the whole pack) only for critical notifications such as reminders, important calendar events or potential security warnings.

Following these three simple rules had transformative results and truly changed the way I interact with my iPhone. Things as easy as rearranging apps, banning red badges, avoiding colors and limiting myself to just two screens literally gave me an extra hour per day back from "wasted time" and made me more present, relaxed and sharp.

Again, little hacks, amazing outcomes.

The future of computing

In 1971 the fastest car in the world was the Ferrari Daytona, capable of 280kph (174mph). The world’s tallest buildings were New York’s twin towers, at 415 metres (1,362 feet). In November that year Intel launched the first commercial microprocessor chip, the 4004, containing 2,300 tiny transistors, each the size of a red blood cell.

Since then chips have improved in line with the prediction of Gordon Moore, Intel’s co-founder. [...] A modern Intel Skylake processor contains around 1.75 billion transistors—half a million of them would fit on a single transistor from the 4004—and collectively they deliver about 400,000 times as much computing muscle. This exponential progress is difficult to relate to the physical world. If cars and skyscrapers had improved at such rates since 1971, the fastest car would now be capable of a tenth of the speed of light; the tallest building would reach half way to the Moon.

It's amazing how, for more than 50 years, we've relied on an empirical law to set the pace of a whole industry. Moore's predictability has allowed us, in some way, to peer into the future and envision when currently impossible applications might be ready to ship. I don't know how much of this slowdown has to do with physics or with the amount of capital required, but it remains an outstanding achievement that we've managed to keep up with it for such a long period of time.
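
As a quick sanity check on the figures quoted above (2,300 transistors in 1971 versus roughly 1.75 billion in a Skylake chip, taking 2015 as the Skylake reference year, which is my assumption), the implied cadence comes out close to Moore's famous two years:

```python
# Back-of-the-envelope check of the doubling cadence implied by the quote.
import math

transistors_4004 = 2_300        # Intel 4004, 1971 (from the quote)
transistors_skylake = 1.75e9    # Intel Skylake (from the quote); 2015 assumed

doublings = math.log2(transistors_skylake / transistors_4004)
years = 2015 - 1971

print(f"{doublings:.1f} doublings over {years} years")
print(f"roughly one doubling every {years / doublings:.2f} years")
# ~19.5 doublings over 44 years, i.e. one doubling every ~2.3 years
```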

This piece embraces Moore's law's slowdown in a really clever way and outlines the areas the industry might focus on in the future, instead of raw speed:

  • Slowing progress in hardware will provide stronger incentives to develop cleverer software.
  • Reliance on the cloud as the way to deliver better services over the internet.
  • New computing architectures (also in the cloud) optimized for particular jobs.

In the same way we used to approach AI with a "brute force" mindset and it turned out that what actually worked was something more "human", maybe hardware will also become more powerful "just in different and more varied ways".

The sadness and beauty of watching Google’s AI play Go

At first, Fan Hui thought the move was rather odd. But then he saw its beauty.

“It’s not a human move. I’ve never seen a human play this move,” he says. “So beautiful.” It’s a word he keeps repeating. Beautiful. Beautiful. Beautiful.

The move in question was the 37th in the second game of the historic Go match between Lee Sedol, one of the world’s top players, and AlphaGo, an artificially intelligent computing system built by researchers at Google. Inside the towering Four Seasons hotel in downtown Seoul, the game was approaching the end of its first hour when AlphaGo instructed its human assistant to place a black stone in a largely open area on the right-hand side of the 19-by-19 grid that defines this ancient game. And just about everyone was shocked.

It's both exciting and terrifying to see how we are able to teach machines how to think. But, to me, the most remarkable feat is that, even though we are using algorithms that emulate the way we learn, machines are developing their own way of thinking. Fan Hui thought it was not a human move, which seems like an obvious statement, but it highlights an amazing reality: the current state of the machine's mind followed a development path with no human intervention at all.

It inevitably reminded me of PlaNet, the deep-learning machine (also developed by folks at Google) that works out the location of almost any photo using only the pixels it contains. It plainly beat humans at guessing photo locations, but it didn't rely on some of the cues we are used to; instead: "We think PlaNet has an advantage over humans because it has seen many more places than any human can ever visit and has learned subtle cues of different scenes that are even hard for a well-traveled human to distinguish."