Category Archives: Technology

Linux Has Come of Age … Now, It Is Time To Take The World

An interesting article, not only on Linux but on open source:

TechCrunch Article: RedHat CEO at Linuxcon – I have no idea what’s next


Filed under Business, Technology

Google’s software stack obsolete

Interesting article. I am not sure this is true of Google, but it is true for most companies, especially large ones with large investments in their proprietary systems. It does take a concerted effort to keep things evolving. In general it is about constant refactoring, but business needs often override engineering imperatives. It is a hard balance to achieve.


Filed under Business, Technology

iPad, GarageBand and The Fabian Schonholz Experience

Today was an interesting day. While I have a bunch of guitars, I decided to pull out my oldest guitar, the one I have had for 28 years. Do not ask why. I also decided to record something using the iPad and GarageBand. I have been working in technology for the better part of the last 35 years and it never fails to amaze me. Every day new technology hits consumers, bringing some very interesting tools – not always useful, however – to us all.

In the case of the iPad, the best is yet to come. The device has endless possibilities, and as its hardware improves, the opportunities will grow exponentially.

And people … this is the outcome of today’s experimentation with technology!! Enjoy.

For iPad/iPhone click here


Filed under Music, Technology

An Answer That Even a Single-Cell Organism Can Understand

By Agustín M Rela
Translated by Fabian Schonholz

I hired Ramiro Rela back in late 2002 or early 2003. The reality is that I did not want to hire him because he did not have the relevant experience I was looking for. I would have had to contend with mutiny had I not hired him. Allowing my other employees to convince me was one of the best decisions I ever made. Ramiro is a scholar and a gentleman, a friend, a fantastic co-worker (when we did work together) and a very intelligent individual with whom I enjoyed pizza, beer or wine and some deep intellectual and technical conversations.

With CERN turning on the LHC and the rumors around the event – whether it could produce a black hole or not – Ramiro asked his father, who is a physicist, to explain what a particle accelerator is and how it works, all in terms a “single-cell” organism would understand. Now … the single-cell organisms are the people at the office who were “aptly” concerned about black holes generated by a particle accelerator … and influenced by the WWW’s rumor mill.

I have translated Ramiro’s father’s explanation because I think it is brilliant and it MUST be shared. It explains in very clear terms not only what an accelerator is but some basic quantum physics concepts needed to understand how and why accelerators are built. I have included the original text in Spanish for those who would rather read the original and save themselves from my horrible and butchered translation. Besides, if you can read it in Spanish, even better, since Agustin writes beautifully.

Here it goes …

Dear Ramiro:

Physics studies everything the hard way. It is one of its curses.

In 1916 Max Planck and others discovered quantum physics, which establishes that objects with definite contours do not exist and that matter is only diffuse energy waves. To make matters worse, these waves can only manifest in packets called “quanta” and only as whole packets. This discovery changed the philosophy of physics, since it wiped out the borderline between object and subject and the principle of causality, and set a numeric limit to the knowledge of reality.

Mass is equivalent to energy through Einstein’s E = mc². And the energy of a quantum (from which you can calculate its mass) is E = hf, where h is Planck’s constant and f the frequency measured in hertz – or cycles per second.

On the other hand, in order to see an object you need to use waves smaller than the object. That is why a virus cannot be seen with an optical microscope: the smallest light wave we can see is about half a thousandth of a millimeter, and viruses are hundreds of times smaller. In order to observe them we need to use the shorter wavelengths provided by electrons – let’s remember that electrons are themselves waves. A more familiar example is ocean waves two or three meters long. They go around the dock posts as if they were not there, but bounce back once they hit a larger boat. However, the small ripples made by a fisherman’s float reflect perfectly off the same dock posts.

A high frequency means a short wavelength, as we realize when we notice that mice squeak but do not roar. And as Planck’s formula tells us, the higher the frequency, the greater the energy associated with a particle. It is an injustice that in order to see a small particle we need to shake it violently with great energy, while to see an elephant a faint sound wave is sufficient.
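The three relations in these paragraphs (λ = c/f, E = hf, m = E/c²) can be checked with a quick calculation. This is just an illustrative sketch of the formulas for a green photon, not part of the original letter; the 500 nm wavelength is my own example:

```python
# Planck and Einstein relations for a green photon (~500 nm wavelength)
h = 6.626e-34   # Planck's constant, J·s
c = 3.0e8       # speed of light, m/s (rounded)

wavelength = 500e-9          # 500 nm, roughly the middle of visible light
f = c / wavelength           # frequency in hertz, via lambda = c / f
E = h * f                    # energy of one quantum (photon), via E = hf
m = E / c**2                 # mass equivalent, via E = mc²

print(f"frequency: {f:.3e} Hz")      # about 6.0e14 Hz
print(f"energy:    {E:.3e} J")       # about 4.0e-19 J
print(f"mass eq.:  {m:.3e} kg")      # about 4.4e-36 kg
```

The numbers illustrate the letter's point: even a photon of visible light carries a tiny but nonzero mass equivalent, and shrinking the wavelength (to "see" smaller things) pushes the energy up proportionally.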

Historically, large molecules could be seen almost without instrumentation. Atoms could only be seen by bombarding them with particles emitted from radioactive sources. Atomic nuclei can be seen by using a cyclotron, a machine in whose interior electrons spin thanks to synchronized attractions and repulsions. With such a machine we can obtain high-energy electrons, thus high frequency and short wavelength. As a consequence, these electrons are apt for bouncing off things smaller than an atom, e.g., its nucleus.

To see what is inside a nucleus, higher-energy particles are needed, in other words, faster particles. Particles are accelerated by attracting and repelling them with correctly polarized electricity. The largest accelerators are big vacuum tunnels, so that the accelerated particles do not collide with anything, and the particles are attracted and repelled by rings of changing polarity. It is as though a horse were running along the tunnel, offered a carrot at one moment and spanked immediately after. Each trainer must rapidly swap his attraction strategy for a repulsion as soon as the irrational beast goes past his nose. Such is what the electric poles do in the accelerator.

The famous LHC, or Large Hadron Collider, is today the largest in the world. It differs from its predecessors in that it can accelerate particles until they have enough energy to see what is inside a particle that belongs to an atom’s nucleus. It accumulates up to 5 trillion electron volts per particle. A hadron is a type of subatomic particle composed of quarks and characterized by the strong nuclear interaction; e.g., protons and mesons. Hadron means “heavy” in Greek.

There is no risk that these experiments may destroy the world. In fact, the world is constantly being bombarded by cosmic rays with energies of ten to the twentieth electron volts, while the LHC does not even reach ten to the thirteenth of the same units. It was built because there is no time to wait for just the required cosmic ray to appear in the required place for an experiment.
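The letter's comparison is easy to verify with back-of-the-envelope arithmetic (a sketch, not part of the original text):

```python
# Energies per the letter: cosmic rays vs. the LHC, in electron volts
cosmic_ray_eV = 1e20   # most energetic cosmic rays hitting Earth daily
lhc_eV = 1e13          # upper bound the letter uses for the LHC

ratio = cosmic_ray_eV / lhc_eV
print(f"Nature outguns the LHC by a factor of {ratio:.0e}")
```

In other words, nature already runs this "experiment" at energies ten million times higher than the LHC, which is the heart of the no-black-hole argument.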

Some people call the LHC “The God Machine” since it produces, and is used to study, particles that were abundant at the beginning of the universe, and the religious tradition associates the beginning of the universe with Creation.

Objections are heard that a large quantity of resources has been invested in basic or pure science with no immediate concrete application. But that always happens. When the first experiments with nuclear energy were conducted, nobody knew that France and Japan would one day obtain most of the electricity they consume from nuclear energy; that cancer would be cured with radioisotopes; or that surgical instruments would be sterilized with atomic radiation. And when Isaac Newton imagined, in 1684, the placement of an artificial satellite in orbit around the Earth, he set in play the wildest and freest scientific imagination, which today makes it possible for people to be evacuated hours before a hurricane hits, for airplanes to save a third of their fuel thanks to weather forecasting, and for me to receive your questions and for you to receive my answers.


Agustín M Rela

---- Spanish Version ----

Querido Ramiro:

La física estudia todo a los golpes; ésa es una de sus maldiciones.

En 1916 Max Plank y otros descubrieron la física cuántica, que establece que los objetos de contornos definidos no existen, y sólo hay ondas difusas de materia. Para colmo, esas ondas sólo se pueden manifestar en paquetes llamados cuantos, y no en fracciones. Ese descubrimiento cambió la filosofía, porque barrió la frontera entre objeto y sujeto y el principio de causalidad, y puso límite numérico al conocimiento que se puede tener de la realidad.

La masa es equivalente a la energía a través de la famosa fórmula de Einstein, E=m.c². Y la energía de un cuanto (de la que se puede calcular la masa) vale E = h.f, donde h es la constante de Plank, y f la frecuencia de la onda, en hertz, o ciclos por segundo.

Por otra parte, para ver un objeto hay que usar ondas más chicas que el objeto, por eso no se puede ver un virus con un microscopio óptico, dado que la onda de luz más chica que podemos ver es de media milésima de milímetro, y el virus es centenares de veces más chico. Para observarlo hay que usar ondas más cortas, provistas por electrones. (Recordemos que los electrones son ondas, igual que cualquier otra cosa.) Un ejemplo más familiar es que las olas del mar, de dos o tres metros de longitud, pasan alrededor de los postes del muelle como si no existieran, pero rebotan en un barco más grande. En cambio las olas pequeñas que hace el corcho del pescador se reflejan perfectamente en los mismos postes.

Una frecuencia alta significa una longitud de onda corta, como nos damos cuenta al comprobar que los ratones chillan, pero no rugen. Y como la fórmula de Planck dice que cuanto mayor es la frecuencia, tanto más grande es la energía asociada a una partícula, resulta la injusticia de que para ver cosas muy chicas hay que sacudirlas a lo bruto con gran energía; en cambio para ver un elefante alcanza con muy débiles ondas de sonido.

En la historia, las moléculas grandes se vieron casi a simple vista. Para ver átomos hubo que bombardearlos con partículas emitidas por fuentes radiactivas. Para ver los núcleos de los átomos hubo que usar un ciclotrón, que es una máquina en cuyo interior giran electrones gracias a repulsiones y atracciones sincronizadas. Con esa máquina se obtienen electrones de mucha energía, por tanto, de alta frecuencia; y por eso, de escasa longitud de onda, y en consecuencia aptos para rebotar en cosas más chicas de un átomo; por ejemplo, su núcleo.

Para ver lo que hay dentro del núcleo hacen falta partículas de más energía, o sea más veloces. Se las lleva a gran velocidad atrayéndolas y rechazándolas con electricidad de la polaridad correcta. Los aceleradores más grandes son túneles en los que se hace el vacío para que las partículas no choquen con nada, y se las atrae y repele con anillos de polaridad cambiante. Es como si por el túnel corriera un caballo y le ofrecieran zanahorias por delante, y lo castigaran a patadas por atrás un momento después. Cada entrenador debería cambiar rápidamente su estrategia de atracción por una de repulsión, apenas pase el irracional frente a sus narices. Eso hacen los polos eléctricos del acelerador.

El famoso LHC, o Gran Colisionador de Hadrones, hoy el más grande del mundo, se distingue de los anteriores en que puede acelerar partículas hasta que tengan la energía suficiente como para ver qué hay dentro de las partículas que forman parte de los núcleos. Acumula hasta 5 billones de electrón voltios por cada partícula. (Algunas fuentes dicen trillones, por error de traducción.) Un hadrón es un tipo de partículas subatómicas compuestas por quarks y caracterizadas por una interacción nuclear de la llamada fuerte; por ejemplo el protón y el mesón. Hadro significa pesado en griego.

No hay riesgo de que esos experimentos destruyan el mundo, porque de hecho todos los días nos bombardean rayos cósmicos de diez a la veinte electrón voltios, mientras que el Gran Chocador no llega a diez a la trece de las mismas unidades. Se lo construyó porque no hay tiempo para esperar que justo aparezca un rayo cósmico en el lugar requerido por un experimento.

Algunos llaman al LHC la Máquina de Dios, porque las partículas que estudia fueron las más abundantes en los comienzos del universo, y la tradición religiosa asocia ese origen con la Creación.

Se oyen objeciones a que se haya invertido una cantidad muy grande de recursos en ciencia básica o pura, o sea sin aplicación concreta a la vista. Pero eso pasa siempre. Cuando se experimentó con la energía nuclear por primera vez, no se sabía que Francia y Japón iban a obtener hoy de ella casi toda la energía eléctrica que consumen; que se iba a curar el cáncer con radioisótopos, o a esterilizar instrumental quirúrgico con radiaciones atómicas. Y cuando Isaac Newton imaginó en 1684 la colocación de un satélite artificial en órbita, puso en juego la más pura imaginación científica libre y desenfrenada, sin sospechar que hoy los satélites hacen que se pueda evacuar a tiempo una región amenazada por un huracán; ahorran hasta un tercio del combustible de aviación gracias al pronóstico meteorológico, y permiten que me lleguen tus preguntas, y que te llegue esta respuesta.

Un abrazo,

Agustín M Rela


Filed under Interesting, Technology

From my iPhone

Wow … can you believe it? An iPhone app to write on my blog. This is fantastic, and believe it or not, this post is being written on my iPhone. The aspect I will need to figure out is workflow. Normally, after I write a post, my wife edits it and then it gets published. Such a workflow does not lend itself to writing on the iPhone.

I do see, however, this app as a quick way to jot down notes and quick ideas that I can later develop into full blog posts. For now … what I need to do is finish a few of the blog posts I have already started.


Filed under Business, Technology

A Hybrid Solution

In an earlier blog post, Building Scalable Web Systems, I discussed at a very high level some of the premises and foundations for architecting scalable systems. What the post did not deal with is insurance and downtime. What is the point of scalability if you have downtime, and what is the business continuity plan that maximizes available resources? The post also does not deal with success. What happens, and what tolerance do the business and the market have, in the case of massive and rapid adoption? How do you deal with it?

Enter cloud computing and Amazon’s EC2. For those not familiar, EC2 is a cloud environment that provides virtualized hosting services. Amazon provides the hardware infrastructure, the pipes, storage and other services. You provide the application. The promise is that you can scale your hardware horizontally without having to deal with the hardware itself and its management and upkeep.

The first question is whether I believe it is 100% ready for prime time. You can argue that loads of companies are using it successfully, thus it is ready. You can also argue that some of the unconfirmed rumors are to be believed because there are indications of truth to them, thus it is not ready. I have talked to some of those companies and gotten mixed reviews, and I have talked to some people who were not all that happy with EC2. So on and so forth.

The second question is whether it matters or not if it is 100% ready for prime time. And on the heels of this question: can it be used as a business continuity tool? I will answer both below.

The obvious third is cost. Through all my calculations (and other people’s), EC2 can be more expensive than running your own systems – of course, at some external data center. But some of the advantages come from quick adaptability, separation of concerns, system automation and self-healing procedures. I will go into more detail on this later as well.

Let’s start with the first question: In my opinion EC2 is not 100% ready for prime time. It is a subjective opinion based on my findings and my level of comfort. Part of the decision is based on cost, but mostly on technical merit:

  • Full virtualization is not where it needs to be, although there are ways to set up virtualization in the right configuration to make it not only more stable but also better performing. Not knowing EXACTLY how EC2’s virtualization layer works (and I am assuming virtualization) creates a big question mark over how things will truly stand up to friction. For example, it is hard to optimize a virtual machine to run DB servers that deal with millions of queries a day. Hardware optimization is important with relational DBs.
  • Virtual NICs have shortcomings. They collapse under high traffic. The way to overcome this “limitation” is by binding each virtual NIC to a physical NIC. However, this defeats the purpose of virtualization and limits the theoretically unlimited number of VMs you can have running on a single server (only as many as you have physical NICs, minus one; you need one NIC for the host operating system).
  • Let’s not forget performance. Even though you can create a limitless number of VMs, the performance of each VM degrades with the provisioning of each new VM on a single server. What I do not know, however, is whether there is an optimal number of VMs. In other words, is there a hard limit below which each VM’s performance characteristics would not change regardless of the number of active VMs? Not too long ago I ran a virtualized farm. Unfortunately, the application I inherited was so horrible that its problems superseded all the problems we had with the environment, so I cannot even begin to answer that question. Needless to say, both the application and the environment were replaced.
  • But it is not just the DBs that need “specially” optimized hardware. Application servers do as well. Maybe not as specialized, but a slow processor creates drag. And adding many VMs to spread the load creates more management and more moving parts, adding to the risk factors and the list of what can go wrong.

Continuing with the questions … YES!!! It does 100% matter that they are not ready for prime time. But really, what we need to ask is to what degree it matters. How far is EC2 from being 100% ready? I do not know, but they look darn close. By adding granularity to the question we come up with multiple degrees of “how much it matters”: 100%? 90%? Etc. In the case of EC2, I think it matters less than 20%. They seem that close to being ready – by my definition.

We can define cloud computing in many ways; however, let’s define it by a behavior: it needs to work like the electric company. Using Bob’s analogy, we do not really know how many generators the electric company has. We just know that when we want or need more juice, we plug into the wall and we get more juice. The more juice we use, the more we pay. In the case of EC2, it seems to work the same way: if you need more capacity, you provision a new “machine” and off you go – well, sort of 😉 This creates the idea that if you need more juice, you plug into the wall and pay for what you consume. Cost aside, it looks like an attractive proposition. But more importantly, think in terms of what it can do for you: almost instant scalability, when you need it and how you need it.

A little digression …

I do not worry anymore about scalable systems. I know how to build them; I have come up with a methodology and an architecture philosophy, and I have repeated their implementation with great success. However, while my architectures scale horizontally without much inconvenience, the problem of scalability has become an issue of predicting need and of procurement time. Now in English: how much traffic will I get, and how long does it take to get the hardware and deploy it? I consider real estate and power procurement part of deploying the hardware.

Over the course of my experience I have found that I need 3 running months of data to predict needs 3 months ahead. I have reduced the problem of CAPEX planning to getting the initial installation right. This initial installation needs to have “enough” capacity to support 3 months of capacity needs. But … what will the capacity needs be in the first three months? On a web-based system, it is somewhat unpredictable. Sure, we could plan marketing campaigns designed to “limit” traffic. However, why would you limit and control traffic – there are a great many arguments in this area – if you have the potential of being ultra successful?
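A minimal sketch of the kind of projection described above: take three months of observed peak traffic, derive an average month-over-month growth factor, and extrapolate three months out. The traffic numbers are made up purely for illustration:

```python
# Project capacity needs 3 months out from 3 months of observed peaks.
observed_peaks = [120, 180, 270]  # requests/sec at monthly peak (hypothetical)

# Estimate month-over-month growth from the observed ratios.
ratios = [b / a for a, b in zip(observed_peaks, observed_peaks[1:])]
growth = sum(ratios) / len(ratios)   # average growth factor (1.5x here)

projected = observed_peaks[-1]
for month in range(1, 4):
    projected *= growth
    print(f"month +{month}: plan for ~{projected:.0f} req/s")
```

A real model would be noisier (seasonality, campaign spikes), but the point stands: with a few months of data the projection becomes tractable, and the genuinely hard problem is the initial installation, for which no data exists yet.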

There is also the argument of cash flow and spending the right amounts of cash on your infrastructure. Funding is a resource and needs to be maximized. Any hardware that is bought today but not used and needed – software as well, though to a lesser extent – depreciates, and for less cash you can buy something better in the future when the resource is truly needed. Therefore, the initial deployment of hardware becomes critical not only from a capacity point of view but also from a “capital resource” point of view. This is not to suggest, however, that you should not deploy for capacity needs early. In other words, stay ahead of the curve: deploy 2 to 3 months before the capacity is needed. What I am suggesting is that you do not need to deploy hardware for needs more than 3 months out.

Back to EC2 …

EC2 not being 100% ready creates a problem, compounded by the fact that it seems to work and seems a short way from being the real deal. I resolved the problem by thinking, with Bob’s help, of EC2 as an insurance policy and a business continuity plan: I will build my staging environment on EC2 – even multiple staging environments.

Let’s define a staging environment as a facsimile of the production environment but scaled down. The facsimile, if at all possible, must contain ALL components.

How to set up an insurance policy and business continuity plan using “the cloud”.

First, let’s look at process and environments. I advocate and implement total separation of environments as part of my software development methodologies. Developers work on their workstations, and QA engineering occurs in isolated environments that represent production as accurately as possible. Staging is the environment where UAT (User Acceptance Testing) occurs and where the build is certified and readied for release. Once it is certified, it is released to production. Staging must be not merely as accurate as possible, but precisely a facsimile of production. By hosting the staging environment on EC2 – or any such cloud environment, for that matter – you can have that precise facsimile at a small cost.
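One way to keep staging a true (if scaled-down) facsimile is to derive it mechanically from the production topology rather than maintain it by hand. The topology below is entirely hypothetical; the idea is that every component survives the scale-down:

```python
# Hypothetical production topology: component role -> instance count.
production = {
    "load_balancer": 2,
    "web_server": 12,
    "app_server": 8,
    "db_master": 1,
    "db_replica": 3,
    "cache": 4,
}

def scale_down(topology, factor=4):
    # Keep at least one of everything so staging remains a full facsimile,
    # just smaller; dropping a component would break the "ALL components" rule.
    return {role: max(1, count // factor) for role, count in topology.items()}

staging = scale_down(production)
print(staging)
```

Deriving staging from the same definition that describes production also means the two cannot silently drift apart, which matters later when staging has to stand in for production.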

Let’s consider the case of wild success: the fact that it is hard to predict, and the capacity needed to sustain it. In this argument I will equate “success” with a “disaster” and look at how we not only recover from it but also ensure continuity:

If traffic spikes past available capacity, the user experience does not merely degrade, it disappears altogether. In this case, a “disaster” has virtually happened, since the service becomes unavailable. In this particular disaster, having the right amount of hardware would have prevented it; as we discussed above, however, this is not always easy to determine. Just like in any disaster, the speed of recovery is vital to the continuation and success of the company. If staging is indeed a 100% scaled-down facsimile of production, then on an environment like EC2, scaling up to provide “capacity” should be a matter of minutes to hours, not days – basically, enough tolerance for the business not to experience catastrophic downtime. Temporarily moving the production environment from self-managed infrastructure to EC2 provides the company with the necessary time to build out, and potentially better plan, capacity at its own facility. Once the “disaster” passes, production can be moved back from EC2.

In order for this temporary migration to happen seamlessly and effectively, a high degree of automation needs to be incorporated into the overall infrastructure from day one. While the last-updated staging environment (there can be multiple) will have the latest code and basic configuration, its data will not be current or accurate. Data migration needs to happen on a regular basis, and all staging environments should have the latest data set for the installed release. Not only must the data updates happen automatically, but the whole discipline, from “disaster” detection to recovery, must be as automated as possible. Once an issue is detected, a single script needs to be run to get the new production environment ready for operations, including the needed changes to DNS, load balancing and firewalls. Furthermore, provisioning and de-provisioning of new VMs should also happen as automatically as possible based on capacity needs.
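The "single script" idea might look like the following orchestration skeleton. Every step here is a hypothetical stub of my own invention (real versions would call your DNS provider, load balancer, and EC2 APIs); the point is the shape: one entry point, ordered steps, no manual intervention:

```python
# Hypothetical failover orchestration: promote staging on EC2 to production.
# Each step is a stub that records what it would do.
actions = []

def provision_capacity(n):
    actions.append(f"provision {n} instances")

def load_latest_data():
    actions.append("restore latest replicated data set")

def update_load_balancer():
    actions.append("register new instances with load balancer")

def switch_dns(target):
    actions.append(f"point DNS at {target}")

def failover_to_cloud(extra_instances=10):
    # Order matters: capacity first, then data, then traffic routing last,
    # so users are never pointed at an environment that is not ready.
    provision_capacity(extra_instances)
    load_latest_data()
    update_load_balancer()
    switch_dns("ec2-staging-promoted")

failover_to_cloud()
print("\n".join(actions))
```

Moving production back after the "disaster" passes would be the same skeleton run in the opposite direction.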

The last part of this EC2 consideration is cost. It is more expensive than it looks. Once you start racking up the VMs on a per-hour basis, racking up traffic at a premium cost and racking up storage, the $0.10 to $0.40 price points start to add up. This is cost that you incur every month and that you cannot “lease”. So, does it add up to more than what it would cost you to build it and manage it yourself? No, but the costs are comparable, at least in my calculations. Therefore, running on EC2 for 1 to 3 months, even though it duplicates the expense for that timeframe, does not, in theory, break the bank, and it provides insurance, albeit at a premium cost.
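A rough sense of the "it adds up" point, using per-instance-hour prices in the range the post mentions (fleet sizes are illustrative numbers only, not from the post):

```python
HOURS_PER_MONTH = 24 * 30   # roughly 720 billable hours per month

def monthly_cost(instances, rate_per_hour):
    # Instance-hours only; bandwidth and storage would come on top.
    return instances * rate_per_hour * HOURS_PER_MONTH

# A hypothetical fleet of 20 instances at each end of the price range.
low = monthly_cost(20, 0.10)
high = monthly_cost(20, 0.40)
print(f"20 instances: ${low:,.0f} to ${high:,.0f} per month")
```

The recurring, non-capitalizable nature of that spend is the real sting: unlike purchased hardware, there is nothing to depreciate or resell, which is why a 1-to-3-month insurance window is tolerable while a permanent double-run may not be.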

I have some strong opinions on how technology should be implemented. I do not care to know the secret sauce, but I do want to know, in more detail than just general terms, how things work – especially if I am going to bet my company on a platform. The unknowns, the uncertainties from the lack of SLAs and the assumptions around virtualization make me a tense CTO. The result: not 100% ready and trustworthy enough to build a company on. I admit, however, that what they have accomplished is very impressive, it makes sense, and of the other commercially viable cloud environments (I am not including Google, Yahoo! and MS), EC2 is the only one that, again in my opinion, is worth considering and ultimately using – whether for production or, as in my case, as an insurance policy to support unpredicted growth and create a conscientious business continuity plan. With time and maturity, EC2 will be a strong solution.


Filed under Business, Technology, Thoughts

Ducks, Rows, Lines And Business Processes

[Image: ducks in a row]

I like my ducks in a row. Oh yes … I do indeed. Every time my ducks get out of alignment I react, to some extent, poorly. This is particularly true as I help build companies through technology. Technology is just a business tool, and even though it may take center stage as the enabler of a business, it is not the business itself. But that is no excuse to bypass technology best practices.

A word on best practices:

Most people take best practices as a recipe, a cookbook, a road to follow. To me, best practices are a set of tools that I can use to accomplish particular tasks. There is no particular guide to the practices but the practices themselves, as I adapt them to my needs. The same goes for development and project management methodologies: I only adhere to my own. Each problem is different and requires adaptations. It is ridiculous to think that one size can fit all, especially when each task sits in the context of varied corporate cultures, projects and business needs.

Back to my ducks …

The whole thing starts with picking the first duck and placing it at the beginning. Then I pick another duck, scurry to some supposed end, and place the “last” duck there. This duck represents where the company may be in the distant future – call it 5 to 10 years out. It is 100% my conjecture, based on my personal vision of where the business will be in “a period of time”. I base this vision on discussions I have with other stakeholders. Will it go there? Who knows. I just like to think of the possibilities and have something to aim at. Does it matter if it does or not? Not at all. The company will experience changes based on the market. The business will go where the market takes it.

Third, I once again scurry around looking for another duck – the right one too – and I place it after the first duck. I turn around, look at the “last” duck and line up the first with the second with the last.

It is time for another duck. I rush to find yet another duck, rush to the front and place it, all neatly lined up with the first, second and “last” ducks. I go find another duck and take it back to the fourth position. I look back at the “last” duck, I look at the row in front; I look back just to make sure … and … the “last” duck is gone. I mean, nowhere to be found. This is not a real duck; it is my duck. How can it have flown away? Or walked?

I drop the duck in my hand somewhere near its position and run back to find the last duck. I look around … I look around … I look around, and I finally find it. There it is. But it is not where it is supposed to be. I pick it up and try to figure out where it really belongs. Undoubtedly, since it moved, it does not go back to its original place. I figure out its new placement, most likely based on changed assumptions and market forces. And then I have to go to the front and quickly rearrange all the other ducks to align them with the “last” duck. This process happens again and again.

It does not bother me that the “last” duck moved – as a matter of fact, I assumed from the beginning that it WOULD move. What truly bothers me is that nobody told me before it got moved, and then I am expected to auto-magically align the other ducks. If the duck had gone “quack quack”, then just by listening I could have quickly rearranged the other ducks on the fly. But these ducks are quiet. They do not make a peep, especially as they are being moved. Or maybe they are being forced to move at gunpoint, under threat of death if they “quack”? 😉

I see building companies very much as a process of putting ducks in a row. True, they do not need to be in a perfect line, but the row should have no gaps. The gaps are potential black holes that can drag the whole business into oblivion. Let’s be clear: gaps do not mean not having answers to all the questions. Many of the business or technology questions are answered as we lay down the ducks. Gaps mean skipping the full understanding of basic elements of the business. In manufacturing it can mean skipping quality in the automation. In software development, not respecting a project plan. In business development, not having an out in a business relationship. In business in general, not having a solid strategy and not continually contesting it, revising it and analyzing potential risk factors.

Often I am asked, “How can you know where the business will be?” As I stated above, I do not know. But I do imagine what the possibilities can be. It is not that hard to look up, take a leap of imagination and visualize where the business can be in 3, 5 or even 10 years. It is a dream. It is pure imagination. It is not real. It is a VISION. It is also a goal to aim for and a way to reverse engineer a road map. Will the business end up there? Most likely not. Most likely it will take detours; it will change and morph; it will reinvent itself. It will struggle to survive (not necessarily in financial terms). The market dictates where a business goes. And my ducks are witnesses to the detours and changes.

Regardless of the market, the vision needs to be there at the beginning. And the vision needs to adapt to the market. A business starts with imagining an idea. It continues with the fantasy of success. And it follows with the excitement of victory. In other words: THE VISION. Not vision as in a corporate statement – the Vision and Mission are important and necessary because they are internal calls to arms and good external communications tools – but vision as a quest to conquer some uncharted land or defeat some mortal enemy. Will the vision change? Absolutely. The change is what keeps things interesting.

I am a technologist and see technology as a business process, not as an esoteric pursuit of technicality. Indeed, the better the technical solution I come up with, the better I feel, and I always strive to produce great technology, including novel work when possible; however, as a function of creating value for the company, not for technology’s sake. I very strongly believe that, in the end, if the technology does not answer the business need, however good and revolutionary it may be, it is worthless.

My ducks, in the end, are just steps in a process to lead an important part of the business to success. Technology is an equal partner to all of the business units. It is normally considered a cost center, but it really is a revenue generator and, through automation and operational efficiencies, a direct profit center.


Filed under Business, Technology, Thoughts