Father’s Day Gifts Through the Generations

As I happily opened my Father’s Day gifts yesterday, I looked at my shiny new keychain and said, “Now I just need some more keys to put on this. But in a few years we won’t even have keys anymore, IoT will turn my cell phone into my keys.”

My wife and I then started discussing Father’s Day gifts through the years, and we both remembered exactly what we made in school for Father’s Day when we were kids: ashtrays. It’s amazing to think about it now, when smoking is relegated to a dwindling number of designated areas, but back then everyone smoked. To get a feel for how pervasive it was, just watch an episode of Mad Men. Now we need to explain to our kids what an ashtray is.

How long before we hold up something like a brass key as a mysterious artifact of the past, something you only see at your grandparents’ house?

My daughter didn’t miss a beat and said, “If that happens, then we’ll just make you a cell phone case.”

And of course she’s exactly right. I can’t wait for the 3D printers to hit our schools and art/shop classes.

IoT and Network Neutrality

In all of the news around the FCC’s net neutrality deliberations, I haven’t seen much discussion of what the network means to the future of the Internet of Things (IoT). The focus for now is on Netflix and the other video providers that consume the majority of internet bandwidth today, and rightfully so. Compared to video streams, the simple messages passed back and forth to network-connected devices are orders of magnitude smaller. So why discuss network neutrality in the context of IoT?

As IoT technologies expand into industry, companies will rely more and more on the real-time data points from throughout their infrastructure. This data will become essential to understanding what is happening at any given time, so all of those IoT messages and what they tell the business will become valuable. As more things are monitored, the number of devices and volume of data from those devices will also increase.

At some point, will these messages become important enough that businesses will pay for better networks and higher delivery rates? The answer is yes, and the solution at that point will be private networks that the business controls, because it will be worthwhile to do so. Companies like France-based Sigfox are already building alternate networks to serve these needs in some parts of the ecosystem. But even these offerings will rely on the open internet to ultimately get data to a customer’s servers.

The open internet needs to remain open for us to get to that point. Consumer services and entry-level devices and services for small businesses need to run well enough on the open internet for users to get value and for the IoT ecosystem to develop without being crippled by potential network taxes. Even large companies need to be able to focus on deploying new IoT infrastructure without worrying about whether their real-time data is really as close to real time as they need it to be.

ISPs, as network experts, need to see this for the opportunity it is. As network use expands, the pie of users gets bigger, which increases the opportunity for them. Setting up limits on networks will only slow development of IoT and hobble the growth the ISPs can tap into, ultimately leading to less profit going forward.

The network neutrality debate may become a huge factor for the development of IoT in addition to all the other areas it will impact. For the sake of the amazing potential of this new platform, networks need to remain ubiquitous, stable, and neutral. There are plenty of other parts of the IoT system we do need to figure out without spending time on the parts that are already solved.

Testing in Perspective

With his keynote at RailsConf this week, David Heinemeier Hansson kicked the test-driven development (TDD) hornet’s nest again (video is linked at the bottom), re-igniting the always simmering debate over the role of testing in software development. While I don’t believe any of his arguments were particularly new, he did talk about how he personally arrived at his current opinion on the role of testing and exposed his audience of Rubyists to the idea that the TDD gospel can be questioned.

I have also evolved to take a pragmatic approach to tests, nicely articulated by Kent Beck (DHH also cited this in his talk):

I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect this level of confidence is high compared to industry standards, but that could just be hubris). If I don’t typically make a kind of mistake (like setting the wrong variables in a constructor), I don’t test for it.

Kent Beck on Stack Overflow

This quote is so popular because it asserts the proper perspective in a succinct and pragmatic way. The product is the point of the project, not the tests. Use your judgment and write enough tests to serve the needs of your project. This pragmatic approach is shared by anyone who has ever been responsible for delivering an entire working product to a paying customer.

The “entire working product” part is important because this seems to be the part that unit-test enthusiasts miss as they focus almost exclusively on low-level interfaces. Most people I’ve talked to who have worked on big projects can tell you that QA testers or customers still find bugs in core features, even on products with very high levels of test coverage.

Most projects have limited time and you need to get the most from the time you have.  If you’ve spent some of that time pursuing 100% unit test coverage, to the exclusion of system testing or other improvements to the product or code, it’s almost certainly not the best use of that time.

How Many Tests?

Zero is too few, and 100% code coverage (which still doesn’t cover everything) is too much. What’s just right?

The approach articulated in Beck’s comment is to optimize the value you get from the time you spend writing tests versus the time you spend writing code. So the question of “How Many Tests?” can be answered, “Enough for you to achieve the benefits of automated testing,” many of which are about saving time on the larger project. To name just a few:

  • Automated tests give developers confidence when adding new features or refactoring existing code, because running the test suite provides a baseline check that core functionality hasn’t been compromised.
  • Tests can help define a feature or interface, helping the developer verify they have provided all requested features, especially on a large team.
  • Tests written to exercise specific bugs protect against regression, at least for those specific issues (see the sketch after this list).
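
As a concrete illustration of that last point, here’s what a regression test might look like using Perl’s Test::More (the My::DateParser module and its parse_date function are invented for this example):

    use strict;
    use warnings;
    use Test::More;

    use My::DateParser qw(parse_date);   # hypothetical module under test

    # Regression test for a specific past bug: two-digit years were
    # once interpreted as 19xx. Keeping this test in the suite ensures
    # the fix stays fixed as the code evolves.
    my $date = parse_date('02/14/14');
    is( $date->{year}, 2014, 'two-digit years map to 20xx, not 19xx' );

    # Baseline checks on core functionality, giving confidence when
    # refactoring.
    my $xmas = parse_date('12/25/2013');
    is( $xmas->{month}, 12, 'month parsed correctly' );
    is( $xmas->{day},   25, 'day parsed correctly' );

    done_testing();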

All of these are benefits because they ultimately save time. However, you begin to lose some of this time savings if you spend too much time pursuing 100% test coverage.

Designing Testable Code

Another point DHH makes in his keynote is that he feels that making design decisions solely to serve the purpose of testing can lead to clunky or poor code design. Again, my initial thought was that this wasn’t particularly controversial and he was just stating one of many factors that any working programmer already considers every day. But TDD does speak to the role of testing in code design, so I can see where some developers would take issue with designing code that might be hard to test.

In addition to providing the benefits noted above, one of the original promises of TDD is that it would actually lead to more modular, and therefore better designed, code. Like all other best practices, this isn’t universally true. Programmers have to think about a hundred different factors when designing code, all while dealing with the fundamental challenge of creating the new features needed for their product and making them work right. Making the code testable is just one of these factors, and it does come naturally if you are testing as you go along.

As an example, brian d foy has popularized the concept of a “modulino” in his Mastering Perl book, in code, and in various talks and presentations. This design pattern makes command-line scripts written in Perl much more testable by encapsulating the code in methods you can call from test files. When writing a command-line script, you need to consider whether the additional code, and perhaps a slight bit of obfuscation, outweighs the benefits of easier tests; a minimal sketch follows the list below.

  • Is the script complicated, with lots of options? Set it up as a modulino so you can write some tests for the hard bits.
  • Will it be around for a long time with many users? Use a modulino to make it easy to add tests in the future as it expands.
  • Is it a very simple script with limited functionality? Maybe skip the extra modulino setup.
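
To make the pattern concrete, here is a minimal modulino sketch of my own (the Greet package and greeting method are invented for illustration, not taken from Mastering Perl). The trick is the caller() check: run as a script, the file executes main(); loaded with require() from a test file, it only defines the package.

    #!/usr/bin/perl
    # greet.pl - a command-line script structured as a modulino
    package Greet;

    use strict;
    use warnings;

    # caller() is false when this file is run directly, so main()
    # executes; when a test does require('./greet.pl'), caller() is
    # true and main() is skipped.
    __PACKAGE__->main(@ARGV) unless caller();

    sub main {
        my ( $class, @args ) = @_;
        print $class->greeting( $args[0] // 'world' ), "\n";
    }

    # The real logic lives in a method a test file can call directly:
    # is( Greet->greeting('Dad'), 'Hello, Dad!', 'greets by name' );
    sub greeting {
        my ( $class, $name ) = @_;
        return "Hello, $name!";
    }

    1;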

The point is that testability is one of many factors and you need to assess and weigh all of these when you’re writing code, including how it impacts the design for you and other future maintainers. More experienced programmers will be aware of more factors and will do a better job assessing the importance of these factors over the life of the code.

Testing against components external to your program, like databases and web services, can lead to some tough decisions about how much to bend your design for testing. It can also lead you down the road of mocking parts of your system, forcing you to decide how significantly you want to compromise your tests as you mock out the environment.
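
As a rough sketch of that trade-off in Perl, the CPAN module Test::MockModule can swap a database call for canned data (the My::DB and My::Report modules here are invented for illustration). The test no longer needs a live database, but it also no longer proves the real query works; that’s the compromise.

    use strict;
    use warnings;
    use Test::More;
    use Test::MockModule;

    use My::Report;   # hypothetical module that queries a database

    # Replace the real My::DB::fetch_sales with a stub returning
    # canned rows, so the test runs without a database connection.
    my $mock_db = Test::MockModule->new('My::DB');
    $mock_db->mock( fetch_sales => sub {
        return ( { region => 'east', total => 100 },
                 { region => 'west', total =>  50 } );
    } );

    my $report = My::Report->new;
    is( $report->grand_total, 150, 'totals summed across regions' );

    done_testing();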

When you write software, the goal is to create working, performant software that can be maintained, extended, and expanded over time, and to do so within the time and budget that makes sense for the end users. When thinking about the patterns and support frameworks you’ll use, including testing, you need to keep this perspective. Some level of cost-benefit assessment is needed to decide how much effort to put into these support structures, and the more experienced the programmer, the more accurate this analysis is likely to be. While there is no question that automated testing should always be part of this analysis, pursuing TDD as a goal in itself can lead to costs out of proportion to the benefits and run counter to the overall goal of creating a useful, compelling product for users.

New Ideas, From Near and Far

NPR ran a story recently on why we miss creative ideas that are right in front of us. Summarizing the research, people rate ideas that they believe came from far away as more creative than ideas they believe came from close by. As someone who has frequently tried to sell new ideas to coworkers and management, this seems pretty plausible as one of the many obstacles to change.

Resisting Innovation

Almost all companies, from the CEO to HR to the customer support group, say they support innovation in all areas of the company. Who doesn’t love new ideas?

Well, the truth is almost everyone fears and resists change. In our tech-driven culture, it’s become almost politically incorrect to suggest that disruptive innovation isn’t welcome, but that’s the reality. People like to do what they have always done. When a business is involved, people will point to “what has always worked” and fight to maintain the status quo for fear that any changes will destroy the business.

So the first challenge you face is that despite the stated love for new ideas, most people will try to shut them down regardless of whether they came from near or far.

Not Invented Here

New ideas you may have picked up from some outside source can also run into the commonly observed “Not invented here” syndrome. This is resistance to ideas, products, or solutions that have come from outside the company because of the belief that the people in the company can do better (and a host of other reasons).

This seems to contradict the near-and-far research, but in the NPR story, Vedantam suggests that one explanation for the different reactions to a new idea is your frame of mind when considering it. Ideas from nearby seem more concrete, prompting you to think about implementation details, which makes you more likely to shoot holes in the new idea. When you’re thinking about something that came from far away, however, you’re “in a more abstract frame of mind,” which allows you to think about possibilities without getting bogged down in details.

Presenting solutions that involve tools (like software packages) or techniques from outside the company may quickly lead technical people to think about the concrete details, making them more likely to see all of the problems. So the “Not invented here” reaction could be inspired by the same dynamics as those demonstrated in the near-and-far research.

Ask the Expert

So what’s the answer? Just forget your new ideas?

One approach I’ve had success with is finding outside consultants to help sell a new idea and possibly help with implementation. Even if you are an expert in the topic and you know how to implement the idea yourself, bringing in outside consultants who believe in the idea as much as you do can be effective.

Consultants can push through resistance in two ways. They can provide the “idea from far away” that helps management think about the idea abstractly and see the possibilities. If you can find consultants from out of town who have to fly in or WebEx in to help with the pitch, even better.

On the flip side, if your consultants are well-known experts in their fields (and they should be), they can help overcome some of the resistance from co-workers worried about implementation details. Consultants have the experience of having successfully implemented the idea before, and your co-workers will likely get a kick out of working with experts in the field.

So consultants can provide the far-away perspective that helps with the big-picture selling of the idea, as well as credibility on the real-world issues of implementing it. On top of that, your manager might just be thinking about all the work you won’t be able to get done if you’re working on the new idea. Consultants don’t provoke the same sort of resistance, and using them for part of the implementation gives you a fixed cost that ends when the engagement is over.

Implementing change and getting buy-in on new ideas is a popular topic, so much so that there’s a whole industry around it, including popular books. This is just one approach I’ve had success with. I’m not a consultant and don’t have any vested interest in promoting them, but the way they can act as agents of change makes them useful even when you already know what you want to do (maybe especially when you already know).

If you’re interested in more on the rise of consulting, the folks at Freakonomics did a podcast on it. Good luck with your new ideas!

The Winter Olympics and 4 Years of TV Everywhere

As the Winter Olympics approach, I can’t help but think back to the Vancouver Olympics in 2010. Although I enjoy the Olympics, Vancouver is memorable for me because I headed up a team at Synacor, Inc. at the time, and we effectively launched TV Everywhere online during those games. HBO Go was soon to follow, but the Olympics was the first truly widespread rollout.

Online Olympics

During the summer of 2009, NBC started talking to cable companies and technology providers like Synacor about streaming live Olympic content online for the Vancouver Olympics. TV Everywhere had already started with some experiments, but nothing had been done across the full subscriber base of cable providers and telcos. NBC had an ambitious plan to try to make content available online to everyone who was paying their cable company to watch it on TV.

It was a bit strange that NBC, a broadcast network, would launch an initiative that required a cable subscription. But it shows how much even the broadcast networks rely on cable revenue, and of course the rights to the Olympics aren’t cheap. As it turned out, access was based on being subscribed to CNBC and MSNBC, the network’s cable properties.

Managing Access

Synacor is a technology provider for many cable companies and telcos, and once we had the basic plan from NBC, we needed to reach out to every customer and figure out how we were going to allow their subscribers to log in to view the Olympics. We had a base of people who already had logins because we provided the email service bundled with their provider’s broadband package. But what about checking what channels they had in their line-up? And then the real curveball: what about video subscribers who didn’t have a broadband package? Or even worse, what about subscribers who got broadband from a telco (DSL) and video from the cable company?

Needless to say, we had our hands full. I spent months on the phone with the tech teams at all of the providers figuring out how to get access to their subscriber data. We needed to map channel line-ups to existing accounts, have a way for new users to provision new accounts, and allow users who didn’t even know they had accounts to reset their passwords.
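
To make the mapping problem concrete, here is a simplified sketch of the kind of entitlement check involved (the field names, channel codes, and the rule requiring both channels are invented for illustration; the real systems and business rules were far messier):

    use strict;
    use warnings;

    # A subscriber record as it might arrive from a provider's
    # back-end billing system (layout invented for illustration).
    my %subscriber = (
        account_id => 'A1001',
        email      => 'fan@example.com',
        channels   => [ 'CNBC', 'MSNBC', 'USA' ],
    );

    # Olympics access hinged on a subscription to NBC's cable
    # properties; this sketch requires both channels.
    my %have = map { $_ => 1 } @{ $subscriber{channels} };

    if ( $have{CNBC} && $have{MSNBC} ) {
        print "$subscriber{email}: authorized for Olympics streaming\n";
    }
    else {
        print "$subscriber{email}: not authorized\n";
    }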

One of the hardest parts was creating accounts for TV-only subscribers. There were technical challenges because many of the online systems were tied to the user having a broadband internet subscription. Even more difficult was convincing people that these subscribers would even have a way to access the content: how could they watch online video without our broadband service?

Bolting Things Together

The Olympics run on fixed dates, and nobody was going to push them back a few days if we weren’t ready. It was a scramble right down to the last minute to get people set up. We ended up with 14 of our customers signing on, and it was a full-time effort to get everything in place.

The authentication and authorization process was a federated identity system using SAML. Basically, that means setting up a trust relationship between two parties (web sites) such that a user can log in with one (Synacor) to get access to content at another (NBC). Getting one of our clients set up first involved getting all of the user and channel information, on a regular schedule, from their back-end business system into our identity system. Then we needed to exchange identity information and metadata with NBC to register that customer’s login page with the NBC Olympics website.
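
As a rough sketch of what actually flows between the parties, here is the shape of a SAML assertion the identity-provider side would send after a successful login. This is heavily stripped down: real assertions are digitally signed and carry far more metadata, and the issuer, audience, and attribute values here are invented for illustration. (The indented heredoc syntax requires Perl 5.26 or later.)

    use strict;
    use warnings;

    # A minimal, unsigned SAML-style assertion: "this subscriber
    # logged in here and is entitled to this content."
    my $assertion = <<~'XML';
        <saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
          <saml:Issuer>https://idp.provider.example.com</saml:Issuer>
          <saml:Subject>
            <saml:NameID>subscriber-A1001</saml:NameID>
          </saml:Subject>
          <saml:Conditions>
            <saml:AudienceRestriction>
              <saml:Audience>https://olympics.nbc.example.com</saml:Audience>
            </saml:AudienceRestriction>
          </saml:Conditions>
          <saml:AttributeStatement>
            <saml:Attribute Name="entitlement">
              <saml:AttributeValue>olympics-streaming</saml:AttributeValue>
            </saml:Attribute>
          </saml:AttributeStatement>
        </saml:Assertion>
        XML

    print $assertion;

The service-provider side validates the signature and the audience before granting access, so the subscriber never has to share their cable-account password with NBC.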

It was hard enough to do this for one login, and we had to do 14. NBC set up a partner portal to help with the process and by the end, my team had the process down pat. Regardless, we were still putting things in place at the last minute as our customers rushed to complete things on their end.

After a few late nights, my team had everything in place and we waited and watched our monitoring systems as the opening ceremonies started.

Let the Games Begin

Thanks to the efforts of my team, we launched on time. We watched and worried and wondered if we had enough capacity to handle the load. We wondered what would happen if the many backend systems we had integrated started to slow down or stopped working altogether. And we looked ahead to the men’s hockey final at the end of the games and wondered what the volume would be 15 minutes before the gold medal game started.

As it turned out, we didn’t need to worry. For us, the load never used more than 10% of our capacity. And as the games rolled forward, other providers did have problems, so rather than risk having angry subscribers, NBC eventually opened up access to some of the more popular events. Regardless, we were happy to be able to say that throughout the events all of our customers had fast logins with no problems.

Still Work to be Done

The online viewing numbers were generally low, and there are a few reasons. For the U.S., the games were in a nearby time zone, making it easy for NBC to put events on linear TV during times when people were home. The streamed events were limited to hockey and curling, so not all content was available online. Finally, sports in general have a different usage profile than other content: people want to watch live, and the audience is much smaller for replays, especially once the outcome is widely known.

From a technical perspective, I viewed it as a big success. We integrated a large number of systems and the technology worked. Users who knew their credentials were able to get in and watch video and other exclusive content.

But other lessons we learned then are still being sorted out, and they are essentially the lessons of web identity in general. How can we make it easy for people to log in? Users need to know their credentials, need to be able to self-provision if they don’t have them, and need to be able to recover passwords when they’ve forgotten them. And as much as providers don’t like it, they need to allow users to select their own identity provider, even if it’s someone else’s.

As the winter games gear up, NBC has announced that it will offer 30-minute free passes to get people engaged with TV Everywhere. Unfortunately, I think the issue is that it’s still too hard for legitimate customers to get logged in. Providers need to streamline their systems, make it easier for users to learn their credentials, and allow subscribers to use logins they already know, like Google or Facebook. I’ll be watching NBC’s numbers with interest to see how far TV Everywhere has come in 4 years.