Published Poetry – 2000

August 4, 2013

Daniel Stroe – Priza in cer

Editura Clusium

Cluj-Napoca, 2000

Published Poetry – 2012

July 7, 2013

Daniel Stroe – Despre puritate ca devenire

Editura Fundatiei Pentru Studii Europene

Cluj-Napoca, 2012

Book – 97 Things Every Software Architect Should Know

September 19, 2010

I enjoyed reading 97 Things Every Software Architect Should Know, and not long ago I found that the publisher maintains the unedited, original text of the book.

I hope this will make you curious enough to sip the wisdom of some of the leading software architects. Here are a few samples for the thirsty:

Architecting is about balancing: "In summary, software architecting is about more than just the classical technical activities; it is about balancing technical requirements with the business requirements of stakeholders in the project."

Architectural Tradeoffs: "Every software architect should know and understand that you can't have it all."

Challenge assumptions – especially your own: "Facts and assumptions are the pillars on which your software will be built. Whatever they are, make sure the foundations are solid."

A word of warning: don't expect a technical recipe for coding. The book can appear as elusive and witty as any of the old aphorisms; it is a modern apophthegmata laconica. You might wonder why 97 and not 78, as in the past, or any other number.

It’s a strong prime :-)

Which is, of course, true, but neither particularly useful nor the actual reason. It’s 97 because that is conveniently close to 100 without actually being 100 or trying too obviously not to be (e.g., 99 and 101). It’s around 100 because that allows for a diverse range of short items, each occupying two pages in printed form, and amounts to a reasonably sized book. The specific number 97 was chosen by Richard Monson-Haefel, editor of 97 Things Every Software Architect Should Know, the first book in the series – by definition, all other books in the 97 Things series are somewhat bound to follow the mould!

Significantly fewer items and either the items would be longer, less diverse and more like ordinary articles, which would discourage people from contributing, or the resulting book would be more like a pamphlet. Significantly more items and the items would either have to be shorter, making them little more than abstracts, or the resulting book would be too long for what it’s trying to do.
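
As an aside on the "strong prime" quip above: 97 is indeed prime and greater than the average of its neighbouring primes, 89 and 101. A quick check (my own snippet, not from the book):

    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    n = 97
    below = next(p for p in range(n - 1, 1, -1) if is_prime(p))   # 89
    above = next(p for p in range(n + 1, 2 * n) if is_prime(p))   # 101
    assert is_prime(n) and n > (below + above) / 2                # 97 > 95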

It is not the only compilation of wisdom with which O'Reilly is spoiling us; there are more, including 97 Things Every Programmer Should Know and 97 Things Every Project Manager Should Know.

BTW there is also a webcast (10 Things Every Software Architect Should Know).

TV – Quo vadis?

May 23, 2010

A few events bring us to question whether we are witnessing a turning point for TV – TV, quo vadis (where are you going)? I don't intend to paint a complete landscape; instead I am hastily picking a few facts to probe whether such a turning point really exists at present.

The changes are multiple: the user experience at one end, and also the technology, the distribution of content and, not least, a massive change of the business model.

There are plenty of examples of attempts to change TV as we know it today – WebTV, Apple TV, TiVo, to name just a few. All these past experiences shaped subsequent product definitions and prepared users for the next gadgets. More recently we have seen an extraordinary attempt, this time Google TV. Although I do not see Google's platform as the only contender, it is interesting to watch the whole phenomenon of emerging, innovative TV platforms.

Present Facts

Let’s probe some of the facts.

Last year Intel bombastically claimed "Future is TV-shaped, says Intel" when announcing its push into the TV business with its CE4100 device; if you want to check an overview of Intel's architecture, read "Intel's Atom heads for digital TVs, STBs".

This week (more precisely, on May 20th, 2010) I found out that Google launched its smart TV service, and I don't think this is a fad marketing campaign. It seems to me a serious attempt to bring change and make a profit out of it. And it is not a single player; it is a team of corporations with complementary competencies, allied to deploy a profitable solution.

Let's take a look at some of Google's declarations: "There is no better medium to reach a wider and broader audience than TV" (for Google's advertising business); "We can make your TV into a games console, a photo viewer or a music player". The TVs and boxes will use Android and will rely on an Intel microprocessor, with Sony as the TV manufacturing partner and Logitech providing peripherals and STBs. A critical Engadget editorial on Google TV highlights the shortcomings of the demo, which is not entirely surprising considering some partners' lack of experience in the TV domain. It is important to highlight the distribution of content over the Internet: not a totally novel idea, but it is impressive to see the alignment and the massive value proposition for a TV product; although the specifications are not fully disclosed yet, something will soon emerge. It is also interesting to follow the related announcements (all issued on May 20, 2010) – Google and DISH Network collaborate to develop integrated multichannel TV and web platform.

We should not forget the observers, who might play a role in determining the deployment's success in the future. Reuters records: "CBS Corp (CBS.N), for one, is keeping an eye on Google TV. "As content owners we applaud innovation," said Zander Lurie, senior vice president of strategic development at CBS.
“On the business model side, we are more prudent about how we evaluate new technologies and how deep we dive in,” he said.”

Google TV is not the only recent announcement; TiVo and Technicolor Team Up to Offer Integrated PVR Solution as well: "As the convergence of linear television and broadband continues to take hold, operators need to deploy advanced television solutions that are cost efficient and ready for rapid deployment"; "As one of the leading set-top box providers in the world, operators were increasingly looking to Technicolor to help address this need. To manage this, we selected TiVo's truly comprehensive solution for marrying TV and Internet content within a single, user-friendly and intuitive interface. TiVo's vast understanding of what television viewers want, coupled with our expertise in manufacturing hardware and the platform porting work we are now doing, will be a major advantage for operators looking to leapfrog the competition."

The idea I want to emphasize is that something is changing in the TV business. Competitors are numerous, and it is hard to tell whether the Google TV partnership will be successful or adopted by the market. I concur with Barton Crockett, quoted by Reuters: "even if Google TV fails, someone will figure this out".

Shortcomings

The demo has shortcomings, which I see as mere consequences of a complex problem to solve. I am not interested in exploring them further; instead I prefer looking into the potential of this offer.

Players Interests and their Roles

Let's try to understand some of the players' interests in Google TV and the parts they play:

  1. Google is in the advertising business and is looking for distribution channels other than desktops, so it is also targeting TV for further expansion. Furthermore, Google is also a content provider through YouTube. Its overall weight allows Google to sponsor emerging ecosystems. Moreover, Google provides the Android software platform to attract business partners and software developers; Android is Google's honey pot, promising free and rapid development. Google needs traction to reach the TV as well.
  2. Logitech and Sony are traditional integrators/providers of TV and STB devices, and they are interested in filling their pipelines with new products. Historically Logitech targeted PC peripherals; with TV now emerging as a new target, having more peripherals for it will help increase its usage. With Sony targeting the premium market with its futuristic products, I am not sure anything has changed in their plans.
  3. Intel is a traditional manufacturer of CPUs and ASICs, and its current PC market is hardly sufficient to sustain growth anymore. An increasing number of features must be integrated into the hardware (like video and 3D graphics) while keeping the cost reasonable, which brings huge computational power and capabilities compared with the TV solutions of the previous decade. Intel is fighting for its reputation, competing with ARM and its partners (TI, Qualcomm, ST, … ). For the software development process it is advantageous to skip the cross-compilation step (as would be needed for ARM), because this brings more, and cheaper, software developers to build applications for the platform, although the Android virtual machine waters down some of this advantage. A TV/STB might not have the power-consumption constraints of a mobile device, so ARM's perceived power-consumption advantage fades in TV.
  4. Dish Network might be worried by the torrent phenomenon, a cannibalizing competitor in its market, and it needs to keep its current base of customers content. With its applications and services, Dish Network provides the user reach for this future platform.
  5. Adobe partnered vigorously with Google to prove that its Flash technology is nothing like what Apple complains about. More devices carrying this technology will bring more revenue for Adobe as well.

However, these announced partnerships will not preclude others from jumping in: more ASIC manufacturers, more TV and STB vendors, and more content providers.

More thoughts

Ideas are great and technology is great, but all of this is not sufficient. It is necessary to come up with business models that bring together an ecosystem, so that the ideas and technology become deployable and are finally distributed to the mass of users. Today it is easier to bring change, as the TV business might end up in a crisis similar to what we see in publishing or music. Certain companies are eager to pursue new products because their past portfolio is drying up, others because their business model is no longer current, and some are simply expanding their reach.

Consumers' expectations are evolving: they are more aware of what they want, more knowledgeable and more curious to explore new TV usage. The user is more active in his selection, and he is not always happy with the content being broadcast or with what he is paying for it. We see the emergence of new providers like Hulu and Netflix because the consumers' mood is changing.

What would be needed for this to become successful? There might be a couple of factors.

A large partnership is required to push a major change of technology usage to the market. It is not possible for a single company to pursue such a major change, because of the high level of complexity, and perhaps the rest of the traditional ecosystem would not allow a single company to reap the entire outcome.

First of all, better technology supports more complex features at lower costs. There are many off-the-shelf components that could become the basis for a platform; this is a period of commoditization. Google will bring commoditization into the TV, which would eventually bring more change into the traditionally closed TV and STB platforms, and this will put tremendous pressure on current market players' way of operating. There will be more features, more components, more players and collaborators, more competencies and, not least, more services.

It is an interesting time for TV, and we are witnessing its transformation. What aspects have I missed, or what am I wrong about?

Apple iPad event and the upturned landscape

January 28, 2010

Today Apple unveiled the iPad mobile device, concluding long-running rumors. This product was delayed many times, with various technical qualities quoted as reasons. I am inclined to believe that Apple was also waiting to take advantage of the market. Bill Gates envisioned the tablet a while ago, but it seems there was too little benefit for the investment at the time.

There is an economic crisis in the print business, and new ways to sell content are easy to spot. There is a flurry of e-readers – the Kindle from Amazon, Sony's ebook reader, Barnes & Noble's ebook reader – and the sheer plethora of devices suggests there is a business case.

But the most interesting case might come from Apple. Not long ago Business Week was commenting:

And if the reports of Apple’s discussion to land print media content in the iTunes store are true, how about an easy-on-the-eyes display for reading electronic magazines and books?

It is hardly a surprise that Apple is not in the business of selling just devices; it sells services for them too. We are spectators of a nascent book distribution chain (the iBooks story): a catalyst for some players and an upset for others. The landscape is changing a lot; there is a lot of dynamism.

There is magic here. First of all, this is possible because there is a shift in end users, who are ready for it and asking for it. And there is the current crisis, generated by changes in consumer habits that shift content distribution and how much content is possessed and carried. The enhancer will be the reading experience (I still enjoy reading printed paper; e-content is just more convenient to carry and to retrieve).
There is no doubt that more e-newspapers and ebooks are expected to sell. One question is who will win shares of the distribution channel. I guess that Apple will try to capitalize on the user experience to lock customers into its distribution. It is just not enough to sell devices!

It is impressive that the time has come to see Apple equipping its devices with its own silicon. Its investments in PA Semi (a processor company acquired by Apple) and Imagination (a 3D IP supplier in which Apple is a stakeholder) are finally being rolled out.
I can imagine a big, direct business loss for CPU providers once Apple discontinues their services, and not least a direct threat to Intel's netbook line of products …

It is amazing how well the product, the silicon and the alliance forging are synchronized – a vibrant presence that produces significant shifts in the market. I will continue watching the scale of the changes.

You might think that I am praising too much, so let me temper it: the product is not perfect, and there are chances for competition. Personally I favor the more "open" devices, and perhaps these might end up becoming the main vehicle; the last word will belong to consumers and the big content providers.

But you should form your own opinion; there is a nice video – http://www.youtube.com/watch?v=y2Hz8dhQw8Q&feature=player_embedded# – that you might enjoy as much as I did. There is a sense of a collaborative team, all selling a neat product. Kudos to them!

There will be many solutions on the market, with different merits, but the most important thing is that they are becoming more affordable (as you said) – E-readers are becoming 'affordable' (having tiny dimensions does not help).
I must admit that I expected it to be more expensive, around $1k, and it turns out to be just a bit more than netbooks, at $449. I am expecting a hit.

Looking into the Maemo Multimedia framework

December 28, 2009

Maemo is a software platform developed by Nokia for smartphones and Internet tablets. The platform is based on the Debian Linux distribution. This last fact is interesting, as Nokia has also been investing in the Symbian platform. Each of these platforms has an open character that can attract participation and sustain a development ecosystem. At this time I do not intend to analyze the ecosystem landscape vis-à-vis iPhone, Android, WebOS and Windows CE, or why Nokia is supporting multiple platforms. The platform comprises the Maemo operating system and the Maemo SDK. The open character of its architecture provides an opportunity to study it and to analyze some of its decisions. The multimedia framework is a key component of the Maemo platform, and it is interesting to follow the evolution of the multimedia architecture across Maemo 2, Maemo 3 and Maemo 5.

Maemo2 multimedia architecture diagram

The media server daemon is removed from the latest architecture documentation, with GStreamer assuming a more central responsibility. GStreamer is a rich multimedia framework that gives applications the ability to treat a variety of hybrid system use cases uniformly (it would be nice to have an accurate requirements spec). The OpenMAX Integration Layer software package is introduced along with the GStreamer framework. OpenMAX IL provides the processing entities as an abstraction of software and hardware resources, exposes the resource constraints for a given scenario instantiation, and lets its processing entities interchange buffers.
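
As a minimal sketch (my own, not from the Maemo documentation) of what this buys an application: with GStreamer the playback code stays the same whether decoding happens in software or in hardware behind an OpenMAX IL wrapper, because element selection is resolved inside the framework. The example uses today's GStreamer 1.0 Python bindings rather than the 0.10 API that Maemo actually shipped, and the media URI is hypothetical.

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)

    # playbin resolves demuxers and decoders automatically, so the application
    # does not care whether a codec runs on the CPU or on a DSP/OpenMAX element.
    player = Gst.ElementFactory.make("playbin", "player")
    player.set_property("uri", "file:///home/user/sample.mp4")  # hypothetical file

    player.set_state(Gst.State.PLAYING)
    bus = player.get_bus()
    # Block until playback finishes or an error is reported on the bus.
    bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                           Gst.MessageType.EOS | Gst.MessageType.ERROR)
    player.set_state(Gst.State.NULL)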

Maemo 5 Multimedia architecture diagram

Maemo relies on TI's OMAP platform, with an ARM–DSP dual processor, a GPU (Graphics Processing Unit) and an ISP (Image Signal Processor). All these computational accelerators support rich multimedia requirements. It is also interesting to see that TI promotes its hardware platform and provides its own OpenMAX and GStreamer software implementations. It is worth mentioning the bridge binding the ARM CPU to the DSP, allowing tasks to be offloaded from the ARM processor to the DSP; TI provides a rich set of audio and video codecs running on the DSP. There has been a significant software architecture change since Maemo 2, where the DSP Gateway was a component serving a number of multimedia application components directly; it has been replaced by TI's DSP bridge in Maemo 5, now integral to OpenMAX IL. The DSP bridge is not visible to applications directly, and the overall application complexity has been reduced. OpenMAX provides a unified interface for those TI codecs, and GStreamer's built-in execution threading alleviates application complexity.

It seems that the Maemo multimedia architecture has moved in the right direction.

Linux kernel development – Some insight on the ecosystem of the commons, and motives

October 1, 2009

The Linux story is not just about technology development; it is also about what it means for a community and what the business is becoming. The Linux kernel grew under a new deal of collaborative effort, shared investment and shared technological return – somewhat of a rebellious mood pushing back against exclusionary, closed systems. Overall it is a major paradigm shift in how business is conducted, because this is a project that demonstrates that cooperation can be useful in developing platforms.

There is an interesting talk by Yochai Benkler on the new open-source economics; his thesis is that the huge cost of developing a product will ultimately lead to social production, with the ownership of capital largely distributed – different from the well-known methods (market and governmental). Furthermore, Benkler says in his Coase's Penguin, or Linux and the Nature of the Firm paper:

In this paper I explain that while free software is highly visible, it is in fact only one example of a much broader social-economic phenomenon. I suggest that what we are seeing is the broad and deep emergence of a new, third mode of production in the digitally networked environment. I call this mode "commons-based peer-production," to distinguish it from the property- and contract-based models of firms and markets. Its central characteristic is that groups of individuals successfully collaborate on large-scale projects following a diverse cluster of motivational drives and social signals, rather than either market prices or managerial commands.

Thanks to Alun Williams I found an interesting 2009 report on Linux Kernel Development, revealing facts on "How Fast it is Going, Who is Doing It, What They are Doing, and Who is Sponsoring It".

The top five individual companies sponsoring Linux kernel contributions include:
* 12.3% Red Hat
* 7.6% IBM
* 7.6% Novell
* 5.3% Intel
* 2.4% Oracle

WHY COMPANIES SUPPORT LINUX KERNEL DEVELOPMENT

The list of companies participating in Linux kernel development includes many of the most successful technology firms in existence. None of these companies are supporting Linux development as an act of charity; in each case, these companies find that improving the kernel helps them to be more competitive in their markets. Some examples:

• Companies like IBM, Intel, SGI, MIPS, Freescale, HP, Fujitsu, etc. are all working to ensure that Linux runs well on their hardware. That, in turn, makes their offerings more attractive to Linux users, resulting in increased sales.

• Distributors like Red Hat, Novell, and MontaVista have a clear interest in making Linux as capable as it can be. Though these firms compete strongly with each other for customers, they all work together to make the Linux kernel better.

• Companies like Sony, Nokia, and Samsung ship Linux as a component of products like video cameras, television sets, and mobile telephones. Working with the development process helps these companies ensure that Linux will continue to be a solid base for their products in the future.

• Companies which are not in the information technology business can still find working with Linux beneficial. The 2.6.25 kernel included an implementation of the PF_CAN network protocol which was contributed by Volkswagen. 2.6.30 had a patch from Quantum Controls BV, which makes navigational devices for yachts. These companies find Linux to be a solid platform upon which to build their products; they contribute to the kernel to help ensure that Linux continues to meet their needs into the future. No other operating system gives this power to influence future development to its users.

There are a number of good reasons for companies to support the Linux kernel. As a result, Linux has a broad base of support which is not dependent on any single company. Even if the largest contributor were to cease participation tomorrow, the Linux kernel would remain on a solid footing with a large and active development community.

It took personal volunteering until the project gained weight and height and became an attractor – nowadays quite a snowball effect. Why? It is the result of the rising cost of designing ever more complex platform features, combined with the price squeeze, which leads commercial companies to rally around the open-source phenomenon, as the latter is less driven by the market.

In the same vein, there is an interesting position in Collaboration is the way out of a crisis, says TSMC – IEF 2009, which reflects the industry's mood to reinvent itself:


“It has to be made more profitable”, said Marced, “and it can only be done by collaboration. We have to make sure that the whole industry makes more money.”

Marced argued that collaboration reduces waste and shares investment while individual efforts lead to redundant initiatives and heavier investment.

There is also the 2008 revision, for those interested in a historical snapshot of Linux kernel development.

Robustness – a success factor

September 30, 2009

One success factor of the Internet is captured in the recommendation of Jon Postel, the originator of the Internet Protocol:
"In general, an implementation must be conservative in its sending behaviour, and liberal in its receiving behaviour. That is, it must be careful to send well-formed datagrams, but must accept any datagram that it can interpret."

This recommendation is generalized into the common-sense robustness principle, which stays unequivocally within the cooperative spirit (of playing nice): "Be conservative in what you do; be liberal in what you accept from others".
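
As a small illustration of the principle (my own sketch, not from Postel's RFC), consider a toy header parser: the receiving side tolerates sloppy casing and whitespace, while the sending side always emits one strict, canonical form.

    def parse_header(line):
        """Liberal receiver: tolerate odd case and stray whitespace,
        but still reject lines we cannot interpret at all."""
        if ":" not in line:
            return None
        name, _, value = line.partition(":")
        return name.strip().lower(), value.strip()

    def format_header(name, value):
        """Conservative sender: always emit a single, well-formed shape."""
        return "%s: %s\r\n" % (name.strip().title(), value.strip())

    # Messy input is still understood...
    assert parse_header("  content-TYPE :  text/html ") == ("content-type", "text/html")
    # ...but whatever we send out is strictly formatted.
    assert format_header("content-type", "text/html") == "Content-Type: text/html\r\n"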

Technology – paraphrasing Clausewitz

September 30, 2009

Von Clausewitz makes the bold statement "Der Krieg ist eine bloße Fortsetzung der Politik mit anderen Mitteln" – war is merely a continuation of politics by other means. Paraphrasing, technology can be seen as a continuation of business by other means.

Applied technical solutions are sponsored with financial means for a financial return. There are standards, stakeholders, active and passive players, and wars. Technical merits alone will not impose a winning solution; the entire business context will tell, and in certain cases even political regulation will come to play a role. The success stories are often mystified, when they are nothing more than personal merit, alignments of the stars and of many people's interests.

Software development cost, some measurements and available references

August 26, 2009

I have gathered a few significant references on some major software platform developments. Although many themes are interesting, this time I am focusing strictly on the estimated development cost. For example, Vista's development cost was estimated in 2006 at about 10 billion USD, and the estimated total development cost of a Linux distribution at 10.8 billion USD.

The first comment that came to my mind: this is a lot of money! It is difficult to justify such an effort.

All of the above are mammoth projects. What about smaller-scale projects? Is there any way to get a feeling for the metrics of a given project? The handiest study material is related to open-source projects, and there are a number of websites that provide metrics for them:

  1. Ohloh and its project search page
  2. Koders and its project search page
  3. Krugle and its project search page
  4. Codase and its search page
  5. Merobase and its search page
  6. JExamples and its search page

To satisfy my intellectual curiosity I have been checking the metrics for GStreamer (a multimedia framework) provided by Ohloh and Koders. A number of common-sense questions come up once a multimedia framework is considered. What does it cost? Should it be built in house, purchased, or taken from open source? What are the legal liabilities? The GStreamer home page provides limited information on such questions, but the code is available for study to gather more.

For the distribution and usage of programming languages in this project, the Ohloh analysis provides the following information:

Language Code Lines Comment Lines Comment Ratio Blank Lines Total Lines
C 742,710 171,976 18.8% 174,728 1,089,414
XML 124,289 1,124 0.9% 1,678 127,091
C++ 22,364 9,922 30.7% 5,743 38,029
Python 18,885 2,708 12.5% 3,341 24,934
C# 18,022 2,589 12.6% 4,203 24,814
Scheme 13,167 296 2.2% 1,981 15,444
Automake 10,540 1,098 9.4% 2,780 14,418
Autoconf 5,221 1,140 17.9% 1,171 7,532
Perl 4,652 450 8.8% 771 5,873
shell script 3,397 730 17.7% 643 4,770
HTML 2,724 1 0.0% 7 2,732
Objective C 1,624 291 15.2% 508 2,423
Assembly 1,273 301 19.1% 236 1,810
XSL Transformation 1,209 67 5.3% 188 1,464
Make 67 4 5.6% 15 86
cmake 21 0 0.0% 7 28
CSS 13 0 0.0% 0 13
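
For what it's worth, the "Comment Ratio" column in the table above appears to be comment lines divided by code plus comment lines; a quick sanity check of the C and C++ rows (my own calculation, not part of the Ohloh output):

    # Comment ratio = comment lines / (code lines + comment lines)
    rows = {"C": (742_710, 171_976), "C++": (22_364, 9_922)}
    for lang, (code, comments) in rows.items():
        ratio = comments / (code + comments)
        print("%-3s %.1f%%" % (lang, 100 * ratio))   # C: 18.8%, C++: 30.7%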

The development cost for the multimedia plugins/codecs is estimated at 1.1 million USD (as provided by the Koders analysis); however, this is not a real-life figure, and I expect the actual costs to be higher. I would guess it is based on the Constructive Cost Model (COCOMO).

Estimated cost: $1,102,910
Assumptions:
Lines of code: 220,582
Person months (PM): 220.58
Functions required: 100.0%
Effort per KLOC: 1.00 PM
Labor cost/month: $5,000
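
The figure above is consistent with a straightforward effort-times-rate calculation; here is a small reconstruction of the estimate from its stated assumptions (my own back-of-the-envelope arithmetic, not Koders' actual tooling):

    # Reconstructing the Koders-style estimate from its stated assumptions.
    LINES_OF_CODE = 220_582
    EFFORT_PER_KLOC = 1.00          # person-months per thousand lines of code
    LABOR_COST_PER_MONTH = 5_000    # USD

    person_months = (LINES_OF_CODE / 1000.0) * EFFORT_PER_KLOC   # ~220.58 PM
    cost = person_months * LABOR_COST_PER_MONTH                  # ~$1,102,910

    print(f"Effort: {person_months:.2f} person-months, cost: ${cost:,.0f}")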

An interesting complementary view is provided by the Coverity architecture library. The Coverity Architecture Analyzer tool uses information gathered during the build of a codebase to create a comprehensive list of interdependencies in the code and to generate diagrams like the one shown below.

gstreamer0.10-1

