9th April 13
With its value in US dollars reaching $200, Bitcoin has captured the attention and imagination of people throughout the world, spreading from a technological curiosity used to pay for geeky items and illegal substances into a phenomenon widely discussed even in mainstream media.
However, it seems to me that most of the coverage misses the point of what Bitcoin really is, misled by the terminology that has surrounded it since its creation. I’m not going to go into either the technical details or the ideas behind its creation, as others have done a much better job of that, but let me explain how I see it from a purely economic standpoint.
Although it’s usually touted as such, Bitcoin is not really a currency, although it has some of a currency’s properties — the main one being the ability to transfer it electronically from one holder to another. But unlike a currency its supply is limited: once the number of coins reaches a predetermined figure it will not be possible to create (or “mine”) more, and long before that the processing power required to mine new ones will be so large that the production rate will be zero for all practical purposes. This also means that its value can’t be inflated, as is the case with actual currencies.
This means that Bitcoin also has some properties of a resource or commodity, not unlike gold, water, arable land or anything else that humans have used, traded and killed for over the centuries. However, unlike most other resources (actually, all that I can think of), Bitcoin a) can be divided almost without limit and traded for other resources (or money) in any amount (a bar of gold can be divided down to the level of an atom; below that it’s not gold any more), and b) has no intrinsic function that would let it keep some value if it loses its role as a measure of value: gold can be turned into jewellery and electronic parts, water can be drunk, oil can be burned and land can be worked or lived on.
In other words, Bitcoin is really an experiment, something new in the global economy perhaps for the first time since paper money was introduced. We’ll see how it fares: whether it succeeds or fails, what it turns into, and whether it’s just the next step in the evolution of economic instruments.
23rd February 12
The Trouble With Non-tech Cofounders
“I’ve seen the problem with non-tech founders a few times now, different people, different ages and backgrounds, with different levels of skill, but all with the same thing in common: having to rely on someone else to bring an idea from paper to screen. The most common mistake I think people like this make is to think that they know in advance what they need to get built, and once they’ve paid for that to be done, and a website has been delivered, that they then have a business.”
1st February 12
This was originally a comment on the article titled Web Second, Mobile First on Mark Suster’s excellent blog Both Sides of the Table. While Mark maintains a consistently high level of quality in his articles, I was, to be honest, a bit disappointed with this one. First of all, it states a number of observations that seem pretty obvious to me (and therefore, I believe, to almost everyone), such as that a smartphone (or “mobile”) is increasingly the first computing device for many new users.
But more importantly, it continues the false dichotomy of “mobile vs. Web”. Why is it false? Simply because modern mobile devices — at least those with their own ecosystems — are perfectly able to display the Web, and it’s becoming extremely easy to develop for both mobile and Web at the same time, with only a few more resources devoted to ensuring cross-platform compatibility (which is necessary even if you develop only for the “Web”, as you need to take into account different browsers and OSs: e.g. if you’re aiming at China, 77% of your visitors will use IE 8.0 or earlier).
If you’re strapped for resources, there is absolutely no need to develop a mobile app and depend on the whims of the AppStore and other walled gardens out there — you can develop for the Web, and make your front-end switch styles automatically according to the device it’s viewed on. You say that you love the new LinkedIn mobile app; but have you seen their mobile Web? It’s pretty much as functional as the mobile app, looks just as good, and has probably required only a bit more resources than the “traditional” Web app.
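The style switching mentioned above can be sketched in a few lines. In practice CSS media queries handle this on the client, but a server-side user-agent check (all names here are hypothetical, and user-agent sniffing is known to be brittle) illustrates the mechanism:

```typescript
// Hypothetical sketch: choose a stylesheet based on the requesting device.
// Real sites would mostly rely on CSS media queries instead of (or alongside)
// user-agent sniffing.
function pickStylesheet(userAgent: string): string {
  const mobilePattern = /iPhone|iPad|Android|Mobile|BlackBerry/i;
  return mobilePattern.test(userAgent) ? "mobile.css" : "desktop.css";
}
```

The same HTML is then served to everyone; only the presentation layer changes per device.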
Essentially, in my opinion, a mobile app makes sense only if it requires no Internet connection to operate properly, so it’s perfect for games, fart jokes and similar use cases. For all the examples Mark mentioned in his article — Yelp, LinkedIn, Foursquare etc — ubiquitous Internet is a prerequisite, which means that there is no advantage over a mobile Web app. Actually, with the modern HTML5 features such as local storage, even a less than 100% reliable connection is not necessarily a problem, as some data can be stored locally and used when offline, syncing it back to the server when online.
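The offline pattern described above can be sketched roughly like this — the names are mine, and `KVStore` stands in for the browser’s `window.localStorage`:

```typescript
// Sketch: queue updates locally while offline, flush them once back online.
// KVStore mimics the subset of window.localStorage we need.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

class OfflineQueue {
  constructor(private store: KVStore, private key: string = "pending") {}

  // Record an update locally; it survives until the server is reachable.
  enqueue(update: object): void {
    const pending: object[] = JSON.parse(this.store.getItem(this.key) ?? "[]");
    pending.push(update);
    this.store.setItem(this.key, JSON.stringify(pending));
  }

  // Once online, hand every queued update to `send` and clear the queue.
  flush(send: (update: object) => void): number {
    const pending: object[] = JSON.parse(this.store.getItem(this.key) ?? "[]");
    pending.forEach(send);
    this.store.setItem(this.key, "[]");
    return pending.length;
  }
}
```

In a real app, `flush` would be triggered by the browser’s `online` event and the queued updates sent via Ajax.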
So, instead of the “Mobile first, Web second” approach, I’d suggest a different strategy to most new startups: “Web (classic and mobile) first, mobile perhaps (if necessary)”.
4th January 12
The new year has barely begun, and I have already started noticing a few tiny trends that might as well be signs of some greater shifts that will develop over the course of the year and beyond:
- Non-programmers learning to code: There was a minor slew of tweets by non-programmers stating that their New Year resolution is to learn to code, mainly using Codecademy. To be honest I am not surprised, as it seems that in the present job crunch and general downturn skilled coders are in ever-rising demand. Even if you don’t aim to find employment as a programmer, knowing how to code is growing more and more useful for a number of smaller tasks in your daily life. I think that the main obstacle to their resolution will come when they realise that “coding” is not a single thing — there have never been more languages, platforms and even target audiences to choose from than today.
- Getting back to work: This Joy of Tech cartoon is the latest and most visible example of something I’ve been noticing everywhere around me: people are getting back to work. Probably a combination of the ongoing depression and the optimism of the New Year is prompting people to turn away from looking for ways to entertain themselves and towards looking for ways to create some value and have some fun doing it. Of course, this attitude has always been present in startups, but it’s spilling over to the general population.
20th November 11
18th April 11
Because the Web was never meant to be developed for.
Originally, the Web was intended as a collection of resources — actually, a filesystem of sorts. But it grew out of all proportion as it became popular, since it allowed users to see color, pictures and animations on the Internet, which had up to that point either been limited to plain text or required heavyweight, non-standard applications to be installed on the client. Actually, the name used for the application that accesses the Web — a browser — tells a lot about how it was intended to be used: to “browse” the resources, not to execute them. Could you imagine what desktop development would look like if you were limited to using just some sort of file viewer to program for it?
Each Web application is, actually, two completely unrelated applications. One is executed on the host and prepares the data for the Web server to serve; but there is another, running in each user’s browser, connected to the former only by asymmetric pairs of requests and responses. Even if it consists only of HTML (and CSS), there is still code being interpreted and evaluated on the client; Ajax apps only make this more obvious.
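A toy sketch of the point (names hypothetical): one program runs on the server and emits, as plain text, the second program that will later run in the browser.

```typescript
// Server-side "application": prepares the data and renders the response.
function renderPage(items: string[]): string {
  const data = JSON.stringify(items);
  // The client-side "application" travels inside the response as mere text;
  // it only comes alive when the browser interprets it.
  const clientCode =
    "const items = " + data + ";\n" +
    "document.body.innerHTML = items.map(i => '<li>' + i + '</li>').join('');";
  return "<html><body><script>" + clientCode + "</script></body></html>";
}
```

The two programs never share state directly; everything the client-side one knows was serialized into the response by the server-side one.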
So it’s not Web development that is broken; in fact, it is a miracle what has been created by the developers to work around the fundamental limitations of the platform, which was never meant to be one.
(This was originally a comment on a posting on Hacker News, linking to an article titled Web development is just broken.)
10th April 11
For quite some time I’ve been a strong opponent of the notion that we are in a new dotcom bubble. While there has indeed been a significant rise in startup investments and valuations, my position has been based on the fact that this time things are a bit different: only a handful of companies are getting seemingly insane valuations, and none of them is available on the public market.
However, today I ran into a few articles discussing the SEC’s plans to loosen the rules that define who can invest in non-public companies and how. And if these plans become reality, I am willing to bet that they will very soon lead to a new tech bubble, which might pop even harder than the one in 2000…
1st March 11
Earlier today, while reading some comments on a post on Hacker News, I had a revelation: ideas, and in particular business or startup ideas, don’t exist. They are just figments of our imagination.
An eternal debate has been raging for decades: those who believe that the business idea is the root of all innovation and progress in business and technology are pitted against those who say that the idea, while mildly relevant, is completely secondary to its execution, which can save a bad idea if done well, or spoil a good one if done badly.
But when you think of it — what are the makings of a successful business? There are numerous ways to analyse one, of course; but I’m sure we will all agree that we can isolate three general elements:
- The product is the actual thing that is being sold to the customers and is bringing revenue to the company. It doesn’t have to be an actual physical product — it may be a piece of software, a service, anything that gives enough value to someone’s life that this someone is willing to pay money for it.
- The way this product is actually produced — which could be called its quality or, if you wish, the execution — is another element, separate from the first one. We may have two functionally identical products, and yet one can easily be better than the other: it will last longer, operate smoothly and without delays, etc.
- And the third element, which rounds out our little group, is the market: a more or less clearly defined group of customers who may differ in many qualities but share the most important one: they are (at least in theory) interested in our product.
As we can see, there is no idea. So how is it that there are so many debates about something that doesn’t exist?
Actually, I lied a bit — there is something we might call “an idea”, for lack of a better word, but it isn’t anything concrete or specific. Most generally, what most people call “an idea” is in reality a more or less vaguely defined combination of the above elements.
In other words, an idea is a concept of a certain product, executed in a certain way, for a certain market. For example, many countries have recently seen a slew of Groupon clones — i.e. the same product and pretty much the same execution as Groupon, only for different markets. Or, applications like The Daily are a known product (a publication) for an old market, only with a new execution (on the iPad).
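That decomposition can be made concrete with a tiny sketch (the field values are illustrative, not data about the actual companies):

```typescript
// An "idea" as nothing more than a combination of three concrete elements.
interface Idea {
  product: string;   // what is being sold
  execution: string; // how it is built and delivered
  market: string;    // who it is for
}

const groupon: Idea = {
  product: "daily deals",
  execution: "e-mail offers backed by a local sales force",
  market: "US consumers",
};

// A typical clone keeps the product and execution, changing only the market:
const clone: Idea = { ...groupon, market: "German consumers" };
```

Viewed this way, the debate over “idea vs. execution” dissolves: every so-called idea is already a particular choice of all three elements.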
It’s quite difficult to come up with a truly novel idea, i.e. something that innovates in all three areas. Luckily that isn’t necessary, since it is usually enough to differentiate in only one to gain a significant competitive edge. But often one isn’t enough — while the Groupon clones can work because their model is basically local, previous stabs at local Facebook clones have invariably failed, since their differentiation — local language — was easily defeated by the original Facebook.
20th May 10
But really, what is the point of the Web browser?
Originally, it’s purpose was to format and properly display the documentation which used HTML to mark it up. The first browser — sir Tim’s World Wide Web — didn’t even support images, and tables weren’t introduced until years later.
It’s funny, really. The Web has brought a revolution, providing standards that allowed anyone to produce network-distributed applications with relatively little effort. Its standards are simple to understand and implement, and they require no tools apart from a text editor.
But when it comes to full-scale applications, the browser still leaves a lot to be desired. Its first disadvantage is that you still need to send the whole user interface along with the data, creating a large overhead. There are some advantages to browser-based applications — like no need to update all the client software — but even that advantage is on one side eroded by services like the AppStore, which update native apps automatically, and on the other made moot by browsers themselves being constantly upgraded.
In my opinion, the browser is going the way of the command-line interface. It will always be here (there is a console even in Windows 7), but it will be used less and less, mainly by those who need to quickly set up an Internet-based application, or as an entry point to a larger company.
However, we will see more and more native applications on all platforms which present a nice interface to the user but rely heavily on Web-service-powered communication with the main service. The best examples are the Twitter clients (which, according to some research, are used by 85% of all Twitter users), whose only purpose is to serve as a front-end to Twitter. Many Web sites have developed native clients for their data — mostly for the iPhone and other mobile devices, but increasingly for the desktop as well (and devices like the iPad arguably start to erase the difference).
27th April 10
Previously, a “technology startup” meant a company founded in order to develop and build a great new technology. Just look at the original startup — Fairchild Semiconductor — or later famous examples like Cisco, Apple or Google: all of them were created to work for years without profit, even without customers, building and testing some new technology before turning it into a product.
Nowadays, everyone expects startups to be somehow miraculously profitable from day one — as Daniel Markham nicely puts it in his post The Startup Racket. In other industries it is normal for a long time to pass between the founding of a company and its profitability — I was shocked when I heard that in pharma and biotech it’s quite normal for a company to go public even before turning a profit. It makes sense that in IT those times will be much shorter — but I never expected they would all but evaporate.
VCs are looking for “traction” and profit before considering investing — but where is the R&D supposed to happen? Just look at the hottest current startups: Facebook, Twitter, Foursquare, Dropbox… While they have a ton of users, none of them is really innovating; they’re taking existing Web and mobile technologies and stretching them to the limit. Perhaps it’s not completely fair to assert that they’re not innovating — it’s just that there’s very little technological innovation going on; what innovation there is stays confined to usability, the social aspects of applications and the like.
But I still think that at one point we’ll have to start inventing again.