I hope these will make you curious to sip wisdom from some of the leading software architects. Here are a few sips for the thirsty:
Architecting is about balancing: “In summary, software architecting is about more than just the classical technical activities; it is about balancing technical requirements with the business requirements of stakeholders in the project.”
Architectural Tradeoffs: “Every software architect should know and understand that you can’t have it all.”
Challenge assumptions – especially your own: “Facts and assumptions are the pillars on which your software will be built. Whatever they are, make sure the foundations are solid.”
A word of warning: don’t expect a technical recipe for coding. The material may appear as elusive and witty as any of the old aphorisms – a modern apophthegmata laconica. You might wonder why 97, and not 78 as in the past, or any other number.
It’s a strong prime
Which is, of course, true, but neither particularly useful nor the actual reason. It’s 97 because that is conveniently close to 100 without actually being 100 or trying too obviously not to be (e.g., 99 and 101). It’s around 100 because that allows for a diverse range of short items, each occupying two pages in printed form, and amounts to a reasonably sized book. The specific number 97 was chosen by Richard Monson-Haefel, editor of 97 Things Every Software Architect Should Know, the first book in the series – by definition, all other books in the 97 Things series are somewhat bound to follow the mould!
Significantly fewer items and either the items would be longer, less diverse and more like ordinary articles, which would discourage people from contributing, or the resulting book would be more like a pamphlet. Significantly more items and the items would either have to be shorter, making them little more than abstracts, or the resulting book would be too long for what it’s trying to do.
By the way, there is also a webcast (10 Things Every Software Architect Should Know).
A few recent events make us ask whether we are witnessing a turning point for TV – TV quo vadis (where are you going)? I don’t intend to paint a complete landscape; instead I am hastily picking a few facts to weigh whether such a turning point is upon us.
The changes are multiple: the user experience at one end, then the technology, the distribution of content and, not least, a massive change of the business model.
There are plenty of past attempts to change TV as we know it today – WebTV, AppleTV, TiVo, to name just a few. All of these past experiences shaped subsequent product definitions and prepared users for the next gadgets. More recently we are seeing an extraordinary attempt, this time Google TV. Although I do not see Google’s platform as the only contender, it is still interesting to watch the whole phenomenon of an emerging, innovative TV platform.
Let’s probe some of the facts.
Last year Intel bombastically claimed “Future is TV-shaped, says Intel” when announcing its push into the TV business with its CE4100 device; if you want to check an overview of Intel’s architecture, read “Intel’s Atom heads for digital TVs, STBs”.
This week (more precisely, on May 20th 2010) I found out that Google is launching a smart TV service, and I don’t think this is a fad marketing campaign. It seems to me a serious attempt to bring change and make a profit out of it. Nor is it a single player: it is a team of corporations with multiple competencies that have allied to deploy a profitable solution.
Let’s take a look at some of Google’s declarations: “There is no better medium to reach a wider and broader audience than TV” (for Google’s advertising business); “We can make your TV into a games console, a photo viewer or a music player”. The TVs and boxes will run Android and rely on an Intel microprocessor, in partnership with Sony for TV manufacturing and Logitech for peripherals and STBs. A critical Engadget editorial on Google TV highlights the shortcomings of the demo, which is not entirely surprising considering some partners’ limited experience in the TV domain. It is important to highlight the distribution of the content over the Internet: not a totally novel idea, but it is impressive to see the alignment and the massive value proposition of a TV product. Although the specifications are not fully disclosed yet, something will soon emerge. It is also interesting to follow the related announcements (all issued on May 20th 2010) – Google and DISH Network collaborate to develop an integrated multichannel TV and web platform.
We should not forget the observers, who might play their role in determining deployment success. Reuters records: “CBS Corp (CBS.N), for one, is keeping an eye on Google TV. ‘As content owners we applaud innovation,’ said Zander Lurie, senior vice president of strategic development at CBS. ‘On the business model side, we are more prudent about how we evaluate new technologies and how deep we dive in,’ he said.”
Google TV is not the only recent announcement; TiVo and Technicolor have also teamed up to offer an integrated PVR solution: “As the convergence of linear television and broadband continues to take hold, operators need to deploy advanced television solutions that are cost efficient and ready for rapid deployment”; “As one of the leading set-top box providers in the world, operators were increasingly looking to Technicolor to help address this need. To manage this, we selected TiVo’s truly comprehensive solution for marrying TV and Internet content within a single, user-friendly and intuitive interface. TiVo’s vast understanding of what television viewers want, coupled with our expertise in manufacturing hardware and the platform porting work we are now doing, will be a major advantage for operators looking to leapfrog the competition.”
The idea that I want to emphasize is that something is changing in the TV business. Competitors are numerous and it is hard to tell whether the Google TV partnership will be successful or adopted by the market. I concur with Barton Crockett, quoted by Reuters: “even if Google TV fails, someone will figure this out”.
The demo has shortcomings, which I see as the natural result of a complex problem to solve. I am not interested in exploring further what those are; instead I prefer looking into the potential of this offer.
Let’s try to understand the interests of some of the players in this Google TV venture and the parts they play:
However, all these announced partnerships will not preclude others from jumping in: more ASIC manufacturers, more TV and STB vendors, and more content providers.
Ideas are great, technology is great, but all of this is not sufficient. It is necessary to come up with business models that bring an ecosystem together, so that the ideas and technology become deployable and finally reach the mass of users. Today it is easier to make a change, as the TV business might end up in a crisis similar to what we see in the publishing or music businesses. Some companies are eager to pursue new products because their past portfolio is drying up, others because their business model is no longer current, and some are simply expanding their reach.
Consumers are evolving in their level of expectation: they are more aware of what they want, more knowledgeable and more curious to explore new TV usage. The user is more active in his selection and is not always happy with the content broadcast, or with what he is paying for it. We see the emergence of new providers like Hulu and Netflix because the consumer mood is changing.
What would be needed for such an attempt to become successful? There might be a couple of factors.
A large partnership is required to push a major change of technology usage to the market. It is not possible for a single company to pursue such a major change because of the high level of complexity, and perhaps the rest of the traditional ecosystem would not allow a single company to reap the entire outcome.
First of all, better technology is supporting more complex features at lower cost. There are many off-the-shelf components that could become the basis of a platform; this is a period of commoditization. Google will bring commoditization into TV, which will eventually bring more change into the traditionally closed TV and STB platforms, and this will put tremendous pressure on the way current market players operate. There will be more features, more components, more players and collaborators, more competencies and, not least, more services.
It is an interesting time for TV and we are witnessing its transformation. What aspects have I missed, or am I wrong about?
Today Apple unveiled the iPad mobile device, putting long-running rumors to rest. The product was delayed many times, and various technical qualities have been cited as the reason. I am inclined to believe that Apple was also waiting to take advantage of the market. Bill Gates envisioned the tablet a while ago, but it seems there was too little benefit for the investment at the time.
There is an economic crisis in the printed-content business, and new ways to sell content are easy to detect. There is a flurry of e-readers – Amazon’s Kindle, Sony’s Reader, Barnes & Noble’s Nook – and the sheer number of them suggests there is a business case.
But the most interesting case might come from Apple. Not long ago, Business Week commented:
And if the reports of Apple’s discussion to land print media content in the iTunes store are true, how about an easy-on-the-eyes display for reading electronic magazines and books?
It is almost no surprise to see that Apple is not in the business of selling just devices; it is selling services for them. We are spectators at the birth of a new book distribution chain (the iBookstore story). A catalyst for some players and an upset for others. The landscape is changing a lot; there is a lot of dynamism.
There is magic here. First of all, this is possible because there is a shift at the end user, who is ready for this and is asking for it. And there is the current crisis, generated by changing consumer habits, which shifts how content is distributed, and how much content is possessed and carried. The enhancer will be the reading experience (I still enjoy reading printed paper; e-content is just more convenient to carry and to retrieve).
There is no doubt that more e-newspapers and ebooks are expected to sell. One question is who will win shares of the distribution channel. I guess that Apple will try to capitalize on the user experience to lock customers into its distribution. It is just not enough to sell devices!
It is impressive that the time has come to see Apple equipping its devices with its own silicon. Its investments in PA Semi (a processor company acquired by Apple) and Imagination (a 3D IP supplier in which Apple is a stakeholder) are finally being rolled out.
I can imagine a big, direct business loss for CPU providers once Apple discontinues their services. And, not least, a direct threat to Intel’s netbook line of products …
It is amazing how well synchronized the product, the silicon and the alliance forging are. A vibrant presence that produces significant shifts in the market. I will continue watching the measure of these changes.
You might think that I praise too much, so let me temper it: the product is not perfect, and there are chances for the competition. Personally I favor more “open” devices, and perhaps those might end up becoming the main vehicle; the last word will belong to the consumer and the big content providers.
But you should build your own opinion. There is a nice video – http://www.youtube.com/watch?v=y2Hz8dhQw8Q&feature=player_embedded# – which you might enjoy as much as I did. There is a sense of a collaborative team, all selling a neat product. Kudos to them!
There will be many solutions on the market, with different merits, but the most important thing is that they are becoming more affordable (as you said) – e-readers are becoming ‘affordable’ (though having tiny dimensions does not help).
I must admit that I expected it to be more expensive, around $1k, and it turns out to be just a bit more than a netbook, at $449. I am expecting a hit.
Maemo is a software platform developed by Nokia for smartphones and Internet Tablets. It is based on the Debian Linux distribution. This last fact is interesting, as Nokia has also been investing in the Symbian platform. Each of the platforms has an open character which can attract participation and sustain a development ecosystem. At this time I do not intend to analyze the ecosystem landscape vis-à-vis iPhone, Android, WebOS and Windows CE, or why Nokia is supporting multiple platforms. The platform comprises the Maemo operating system and the Maemo SDK. The open character of its architecture provides an opportunity to study it and to analyze some of its decisions. The Multimedia Framework is a key component of the Maemo Platform, and it is interesting to follow the evolution of its architecture across Maemo 2, Maemo 3 and Maemo 5.
The media server daemon is removed from the latest architecture documentation, with GStreamer assuming a more central responsibility. GStreamer is a rich multimedia framework that gives applications the ability to treat a variety of hybrid system use cases uniformly (it would be nice to have an accurate requirements spec). The OpenMax Integration Layer software package is introduced alongside the GStreamer framework. OpenMax IL provides the processing entities as an abstraction of software and hardware resources; it exposes the resource constraints for a given scenario instantiation, and its processing entities interchange buffers.
Maemo relies on TI’s OMAP platform, resourced with an ARM–DSP dual processor, a GPU (Graphics Processing Unit) and an ISP (Image Signal Processor). All these computation accelerators support rich multimedia requirements. It is also interesting to see that TI is promoting its hardware platform and provides its own OpenMax and GStreamer software implementations. Worth mentioning is the bridge binding the ARM CPU to the DSP, which allows tasks to be offloaded from the ARM processor to the DSP; TI provides a rich set of audio and video codecs running on the DSP. There has been a significant software architecture change since Maemo 2, where the DSP Gateway was a component serving a number of multimedia application components directly; it has recently been replaced by TI’s DSP bridge in Maemo 5, now integral to OpenMax IL. The DSP bridge is not visible to applications directly, and the overall application complexity has been reduced. OpenMax provides a unified interface for those TI codecs, and GStreamer’s built-in execution threading alleviates the application complexity.
It seems that the Maemo multimedia architecture moved into the right direction.
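To make the decoupling concrete, here is a minimal sketch, purely illustrative and not the real GStreamer or OpenMax IL API, of the idea behind such a framework: heterogeneous processing entities hidden behind one uniform interface and chained into a pipeline, so the application neither knows nor cares whether a given stage runs on the ARM CPU or is offloaded to a DSP codec. All class and method names below are invented for the example.

```python
# Illustrative sketch (NOT the actual GStreamer/OpenMax IL API): a pipeline
# of processing elements behind one uniform interface, so the application
# is decoupled from where each element actually executes (CPU vs. DSP).

class Element:
    """Uniform processing entity: consumes a buffer, produces a buffer."""
    def process(self, buf):
        raise NotImplementedError

class FileSource(Element):
    """Produces the raw media bytes; ignores its (empty) input."""
    def __init__(self, data):
        self.data = data
    def process(self, _buf):
        return self.data

class Decoder(Element):
    """Stand-in for a codec; could be pure software or a DSP offload."""
    def process(self, buf):
        return buf.decode("utf-8")  # "decoding" here is just bytes -> str

class Sink(Element):
    """Renders (here: records) the decoded buffer."""
    def __init__(self):
        self.rendered = None
    def process(self, buf):
        self.rendered = buf
        return buf

class Pipeline:
    """Chains elements; the application only ever sees this interface."""
    def __init__(self, *elements):
        self.elements = elements
    def run(self):
        buf = None
        for element in self.elements:
            buf = element.process(buf)
        return buf

sink = Sink()
pipeline = Pipeline(FileSource(b"hello media"), Decoder(), sink)
result = pipeline.run()
print(result)  # -> hello media
```

The point of the sketch is the architectural one made above: swapping the software `Decoder` for a hardware-accelerated one changes nothing for the application, which is exactly what hiding the DSP bridge behind OpenMax IL buys Maemo 5.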
The Linux story is not just about technology development; it is also about what it means for a community and what the business is becoming. The Linux kernel grew under a new deal of collaborative investment of effort and shared technological return – in a somewhat rebellious mood, pushing back against exclusionary, closed systems. Overall it is a major paradigm shift in how business is conducted, because this is a project that demonstrates that cooperation can be effective in developing platforms.
There is an interesting talk by Yochai Benkler on the new open-source economics. His thesis is that the huge cost of developing a product will ultimately lead to social production, with ownership of the capital largely distributed – different from the well-known methods (market and governmental). Furthermore, Benkler says in his Coase’s Penguin, or Linux and the Nature of the Firm paper:
In this paper I explain that while free software is highly visible, it is in fact only one example of a much broader social-economic phenomenon. I suggest that we are seeing is the broad and deep emergence of a new, third mode of production in the digitally networked environment. I call this mode “commons-based peer-production,” to distinguish it from the property- and contract-based models of firms and markets. Its central characteristic is that groups of individuals successfully collaborate on large-scale projects following a diverse cluster of motivational drives and social signals, rather than either market prices or managerial commands.
The top five individual companies sponsoring Linux kernel contributions include:
* 12.3% Red Hat
* 7.6% IBM
* 7.6% Novell
* 5.3% Intel
* 2.4% Oracle
WHY COMPANIES SUPPORT LINUX KERNEL DEVELOPMENT
The list of companies participating in Linux kernel development includes many of the most successful technology firms in existence. None of these companies are supporting Linux development as an act of charity; in each case, these companies find that improving the kernel helps them to be more competitive in their markets. Some examples:
• Companies like IBM, Intel, SGI, MIPS, Freescale, HP, Fujitsu, etc. are all working to ensure that Linux runs well on their hardware. That, in turn, makes their offerings more attractive to Linux users, resulting in increased sales.
• Distributors like Red Hat, Novell, and MontaVista have a clear interest in making Linux as capable as it can be. Though these firms compete strongly with each other for customers, they all work together to make the Linux kernel better.
• Companies like Sony, Nokia, and Samsung ship Linux as a component of products like video cameras, television sets, and mobile telephones. Working with the development process helps these companies ensure that Linux will continue to be a solid base for their products in the future.
• Companies which are not in the information technology business can still find working with Linux beneficial. The 2.6.25 kernel included an implementation of the PF_CAN network protocol which was contributed by Volkswagen. 2.6.30 had a patch from Quantum Controls BV, which makes navigational devices for yachts. These companies find Linux to be a solid platform upon which to build their products; they contribute to the kernel to help ensure that Linux continues to meet their needs into the future. No other operating system gives this power to influence future development to its users.
There are a number of good reasons for companies to support the Linux kernel. As a result, Linux has a broad base of support which is not dependent on any single company. Even if the largest contributor were to cease participation tomorrow, the Linux kernel would remain on a solid footing with a large and active development community.
It took personal volunteering until the project gained enough weight and height to become an attractor; these days it is quite a snowball effect. Why? It is the combined result of the rising cost of designing ever more complex platform features and a price squeeze, which leads commercial companies to rally around the open source phenomenon, since the latter is less driven by the market.
On this note there is an interesting position in Collaboration is the way out of a crisis, says TSMC – IEF 2009, which reflects the industry’s mood of reinvention:
“It has to be made more profitable”, said Marced, “and it can only be done by collaboration. We have to make sure that the whole industry makes more money.”
Marced argued that collaboration reduces waste and shares investment while individual efforts lead to redundant initiatives and heavier investment.
There is also the 2008 revision, for those interested in a historical snapshot of Linux kernel development.
One success factor of the Internet is captured in a recommendation from Jon Postel, editor of the original Internet Protocol specification:
“In general, an implementation must be conservative in its sending behaviour, and liberal in its receiving behaviour. That is, it must be careful to send well-formed datagrams, but must accept any datagram that it can interpret.”
This recommendation has been generalized into the common-sense Robustness Principle, which stays unequivocally within the cooperative spirit (of playing nice): “Be conservative in what you do; be liberal in what you accept from others”.
Von Clausewitz uses the bold statement “Der Krieg ist eine bloße Fortsetzung der Politik mit anderen Mitteln” – war is merely a continuation of politics by other means. To paraphrase: technology is a continuation of business.
Technical solutions are sponsored with financial means, for financial return. There are standards, stakeholders, active and passive players, and wars. Technical merit alone will not crown a winning solution; the entire business context will tell, and in certain cases even political regulation will come to play a role. Success stories are often mystified, when they are nothing more than personal merit, alignments of the stars, and of many persons’ interests.
I came across a few significant references on some major software platform developments. Although many of their themes are interesting, this time I am focusing strictly on the estimated development cost. For example, Vista’s development cost was estimated in 2006 at about 10 billion USD, and the estimated total development cost of a Linux distribution is 10.8 billion USD.
The first comment that came to my mind: this is a lot of money! It is just difficult to justify such an effort.
All of the above are mammoth projects. What about smaller-scale projects? Is there any way to get a feeling for the metrics of a given project? The most handy study material relates to open source projects, and there are a number of websites that provide metrics for them:
To satisfy my intellectual curiosity I have been checking the metrics for GStreamer (a multimedia framework) provided by Ohloh and Koders. A number of common-sense questions come up once a multimedia framework is considered. What is its cost? Should it be built in house, purchased, or taken from open source? What are the legal liabilities? The GStreamer home page provides limited information on such questions; however, the code is available for study, for gathering more information.
For this project’s software license structure and the distribution and usage of programming languages, the Ohloh analysis provides the following information:
| Language | Code Lines | Comment Lines | Comment Ratio | Blank Lines | Total Lines |
The development cost for the multimedia plugin codecs is estimated at 1.1 million USD (as provided by the Koders analysis); however, this is not a real-life case and I expect the actual costs to be higher. I would guess the estimate is based on the Constructive Cost Model (COCOMO).
| Lines of code: | 220,582 |
| Person months (PM): | 220.58 |
| Effort per KLOC: | 1.00 PM |
An interesting complementary view is provided by the Coverity architecture library. The Coverity Architecture Analyzer tool uses information gathered during the build of a codebase to create a comprehensive list of interdependencies in the code and to generate diagrams like the one shown below.