Conference Calls on iPhone

I do a lot of conference calls. I love the convenience of tapping a number almost anywhere on my iPhone 6 and having it dial the number, whether it’s from a meeting in my calendar or in an email. But most conference call numbers require a pin or meeting code to connect to the meeting, and for the longest time I was frustrated with having to remember the code. I finally figured out an easy way to handle the conference call number and pin together.

You can provide the phone number and conference pin to the phone app if you format it like this (the number and pin here are made up, just to show the pattern):
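
    1-800-555-0123;987654#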


That’s the call-in number, a semicolon, the conference pin, and the pound sign. The semicolon inserts a wait (a hard pause), so the phone dials just the initial number and holds the pin until you’re ready. When you tap a properly formatted number, the iPhone dials it and the conference pin shows up in the bottom left-hand corner next to the word “Dial…”


Listen for the conference call service to answer and prompt you for the pin, then tap “Dial” and the pin will be typed in. No more remembering the pin!

Some conference scheduling software already formats the number this way, so it works without any changes, and you can now make sure any invitations you create are formatted the right way.

If you know you need to call in using a number that’s not formatted properly, you can still set things up so you don’t need to remember the pin. Copy the phone number and pin from wherever they are (an email, a document, etc.), paste them into a note in the Notes app, format the text as described above, then copy the whole thing.

Open the phone app and tap and hold for a second in the white area above the number pad. When you do, the paste menu will show up, allowing you to paste in the phone number plus pin.


The phone will dial and the pin will show up in the bottom left-hand corner as before. Again, no more remembering the pin!

Is IoT All Win-win?

A recent Freakonomics podcast got me thinking about the potential for creative destruction in the Internet of Things. Thinking simply, there are three reasons a business will embark on IoT projects: to save money, to make money, and to help with regulatory requirements.

If you think about the potential of those, they sound great. The creative efforts don’t seem obviously destructive for many use cases, because we’re adding sensors and collecting data in places we just didn’t watch before. It sounds like a story with no losers, because efficiency is all upside, right?

As I thought about it, I realized it’s not that simple. Here is an example.

If you’ve ever been to a wine bar, you’ll know they sell wine by the glass and by the bottle. The wines at a wine bar are higher-end than at a typical bar, so when they open a bottle to pour a customer one glass, they are taking the risk that they may not be able to sell every glass in that bottle. There are tons of varieties, and once opened, a bottle only stays fresh and sellable for so long. This can add up to a lot of expensive wasted wine.

What if new IoT corks with embedded sensors could track the quality of the wine and the time since each bottle was opened? The wine bar owner could then get a daily report of what is available and offer specials on opened wine they need to sell, getting the most out of the bottles they have already opened.

It sounds all win-win, right? Customers get a daily special, the owner wastes less wine.

But this probably results in the wine bar owner buying less wine, because they are wasting less. That’s where the savings come from. So the wine sellers are likely to sell less wine over time, because in the current system the amount they sell includes the wasted wine.
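
To put made-up numbers on it: if a wine bar goes through 20 bottles a week and ends up pouring out the equivalent of 3 of them, cutting that waste to a bottle or less means the owner only needs to buy 17 or 18 bottles to serve the same customers. Nobody drinks less, but the distributor’s sales to that bar just dropped 10 to 15 percent.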

As the piano makers at the turn of the last century learned, few acts of creative destruction are purely, or even mostly, win-win. In the past, these disruptions have always led to more and better opportunity. Depending on your view, this may still be the case with the internet revolution in general. In the IoT business specifically, at least in the near term, the huge amount of work to implement these systems and get full value from the new data should offset the losses from the creative destruction. The longer-term impact is much less clear.


Father’s Day Gifts Through the Generations

As I happily opened my Father’s Day gifts yesterday, I looked at my shiny new keychain and said, “Now I just need some more keys to put on this. But in a few years we won’t even have keys anymore, IoT will turn my cell phone into my keys.”

My wife and I then started discussing Father’s Day gifts through the years and we both remember exactly what we made in school for Father’s Day when we were kids: ashtrays. It’s amazing to think about it now, when smoking is relegated to a dwindling number of designated areas, but back then everyone smoked. To get a feel for how pervasive it was, just watch an episode of Mad Men. Now we need to explain to our kids what an ashtray is.

How long before we hold up something like a brass key as a mysterious artifact of the past, something you only see at your grandparents’ house?

My daughter didn’t miss a beat and said, “If that happens, then we’ll just make you a cell phone case.”

And of course she’s exactly right. I can’t wait for the 3D printers to hit our schools and art/shop classes.

IoT and Network Neutrality

In all of the news around the FCC’s net neutrality deliberations, I haven’t seen much discussion of what the network means to the future of the Internet of Things (IoT). The focus for now is on the current users of the majority of internet bandwidth, Netflix and other video providers, and rightfully so. Compared to video streaming, the simple messages passed back and forth to network-connected devices are orders of magnitude smaller. So why discuss network neutrality in the context of IoT?

As IoT technologies expand into industry, companies will rely more and more on the real-time data points from throughout their infrastructure. This data will become essential to understanding what is happening at any given time, so all of those IoT messages and what they tell the business will become valuable. As more things are monitored, the number of devices and volume of data from those devices will also increase.

At some point, will these messages become important enough that businesses will pay for better networks and higher delivery rates? The answer is yes, and the solution at that point will be private networks that the business controls, because it will be worthwhile to do so. Companies like the France-based Sigfox are already building alternate networks to serve these needs in some parts of the ecosystem. But even these offerings will rely on the open internet to ultimately get data to a customer’s servers.

The open internet needs to remain open for us to get to that point. Consumer services and entry-level devices and services for small businesses need to run well enough on the open internet for users to get value and for the IoT ecosystem to develop without being crippled by potential network taxes. Even large companies need to be able to focus on deploying new IoT infrastructure without worrying about whether their real-time data is really as close to real time as they need it to be.

ISPs, as network experts, need to see this for the opportunity it is. As network use expands, the pie of users gets bigger, which increases the opportunity for them. Setting up limits on networks will only slow development of IoT and hobble the growth the ISPs can tap into, ultimately leading to less profit going forward.

The network neutrality debate may become a huge factor in the development of IoT, in addition to all the other areas it will impact. For the sake of the amazing potential of this new platform, networks need to remain ubiquitous, stable, and neutral. There are plenty of other parts of the IoT system we still need to figure out; we shouldn’t have to spend time on the parts that are already solved.

Testing in Perspective

With his keynote at RailsConf this week, David Heinemeier Hansson kicked the test-driven development (TDD) hornet’s nest again (video is linked at the bottom), re-igniting the always-simmering debate over the role of testing in software development. While I don’t believe any of his arguments were particularly new, he did talk about how he personally arrived at his current opinion on the role of testing, and he exposed his audience of Rubyists to the idea that the TDD gospel can be questioned.

I have also evolved to take a pragmatic approach to tests, nicely articulated by Kent Beck (DHH also cited this in his talk):

I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect this level of confidence is high compared to industry standards, but that could just be hubris). If I don’t typically make a kind of mistake (like setting the wrong variables in a constructor), I don’t test for it.

– Kent Beck on Stack Overflow

This quote is so popular because it asserts the proper perspective in a succinct and pragmatic way. The point of the project is the product, not the tests. Use your judgement and write enough tests to serve the needs of your project. This pragmatic approach is shared by anyone who has ever been responsible for delivering an entire working product to a paying customer.

The “entire working product” part is important because this seems to be the part that unit-test enthusiasts miss as they focus almost exclusively on low-level interfaces. Most people I’ve talked to who have worked on big projects can tell you that bugs are still found in core features by QA testers or customers, even on products with very high levels of test coverage.

Most projects have limited time, and you need to get the most from the time you have. If you’ve spent some of that time pursuing 100% unit test coverage, to the exclusion of system testing or other improvements to the product or code, it’s almost certainly not the best use of that time.

How Many Tests?

Zero is too few, and 100% code coverage (which still doesn’t cover everything) is too much. What’s just right?

The approach articulated in Beck’s comment is to optimize the value you get from the time you spend writing tests versus the time you spend writing code. So the question of “How Many Tests?” can be answered, “Enough for you to achieve the benefits of automated testing,” many of which are about saving time on the larger project. To name just a few:

  • Automated tests give developers confidence in adding new features or refactoring existing code because running the test suite gives an initial baseline that core functionality hasn’t been compromised.
  • Tests can help define a feature or interface, helping the developer verify they have provided all requested features, especially on a large team.
  • Tests written to exercise specific bugs protect against regression, at least for those specific issues.

All of these are benefits because they ultimately save time. However, you begin to lose some of this time savings if you spend too much time pursuing 100% test coverage.

Designing Testable Code

Another point DHH makes in his keynote is that he feels making design decisions solely to serve the purpose of testing can lead to clunky or poor code design. Again, my initial thought was that this wasn’t particularly controversial and that he was just stating one of many factors any working programmer already considers every day. But TDD does speak to the role of testing in code design, so I can see where some developers would take issue with designing code that might be hard to test.

In addition to providing the benefits noted above, one of the original promises of TDD is that it would actually lead to more modular, and therefore better designed, code. Like all other best practices, this isn’t universally true. Programmers have to think about a hundred different factors when designing code, all while dealing with the fundamental challenge of creating the new features needed for their product and making them work right. Making the code testable is just one of these factors, and it does come naturally if you are testing as you go along.

As an example, brian d foy has popularized the concept of a “modulino” in his Mastering Perl book, in code, and in various talks and presentations. This design pattern makes command-line scripts written in Perl much more testable by encapsulating the code in methods you can call from test files (a minimal sketch follows the list below). When writing a command-line script, you need to consider whether the additional code, and perhaps a slight bit of obfuscation, outweighs the benefits of easier tests.

  • Is the script complicated with lots of options? Set it up as a modulino so you can write some tests for the hard bits.
  • Will it be around for a long time with many users? Use a modulino to make it easy to add tests in the future as it expands.
  • Is it a very simple script with limited functionality? Maybe skip the extra modulino setup.
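
Here is a minimal sketch of the pattern, with made-up package and subroutine names (My::Greeter, greeting). The only essential piece is the caller() check: the main logic runs when the file is executed as a script, but not when a test file loads it.

    #!/usr/bin/env perl
    # A minimal modulino sketch (hypothetical names): runs as a normal
    # command-line script, but can also be loaded from a test file.
    package My::Greeter;

    use strict;
    use warnings;

    # Run main() when executed directly; skip it when a test file loads
    # this file with require, because then caller() is true.
    __PACKAGE__->main(@ARGV) unless caller();

    sub main {
        my ( $class, @args ) = @_;
        my $message = $class->greeting( $args[0] // 'world' );
        print "$message\n";
    }

    # The piece worth testing in isolation.
    sub greeting {
        my ( $class, $name ) = @_;
        return "Hello, $name!";
    }

    1;

A test file can then require './greeter.pl' and call My::Greeter->greeting('Alice') directly with Test::More, without the command-line behavior getting in the way.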

The point is that testability is one of many factors and you need to assess and weigh all of these when you’re writing code, including how it impacts the design for you and other future maintainers. More experienced programmers will be aware of more factors and will do a better job assessing the importance of these factors over the life of the code.

Testing against components external to your program, like databases and web services, can lead to some tough decisions about how much to bend your design for testing. It can also lead you down the road of mocking parts of your system, and to the question of how significantly you want to compromise your tests as you start to mock the environment.
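
To make that trade-off concrete, here is a rough sketch of one low-tech way to mock an external call in Perl. The module and method names are made up, and the “service” is defined inline so the example stands on its own; the idea is simply to swap the subroutine that talks to the outside world for a canned response while the test runs.

    use strict;
    use warnings;
    use Test::More tests => 1;

    # Stand-in for a real module that would normally call a live web service.
    package My::PriceClient;

    sub new { return bless {}, shift }

    sub fetch_rate {
        # Imagine an HTTP request to a currency service here.
        die "no network access in tests\n";
    }

    sub price_in_euros {
        my ( $self, $dollars ) = @_;
        return $dollars * $self->fetch_rate();
    }

    package main;

    {
        # Replace the network call with a canned rate, for this block only.
        no warnings 'redefine';
        local *My::PriceClient::fetch_rate = sub { return 1.25 };

        is( My::PriceClient->new->price_in_euros(100), 125,
            'conversion logic uses the fetched rate' );
    }

The test is now fast and deterministic, but it no longer proves anything about the real service or the network, which is exactly the compromise you have to weigh.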

When you write software, the goal is to create working, performant software that can be maintained, extended, and expanded over time. You need to do so within the time and budget that makes sense for the end users. When thinking about the patterns and support frameworks you’ll use, including testing, you need to keep this perspective. You need to do some level of cost-benefit assessment to decide how much effort to put into these support structures, and the more experienced the programmer, the more accurate this analysis is likely to be. While there is no question that automated testing should always be part of this analysis, pursuing TDD as a goal in itself can lead to costs out of proportion to the benefits and run counter to the overall goal of creating a useful, compelling product for users.