Problem Solving: The Socratic Paradox as a prerequisite for learning and a counter to the Dunning-Kruger effect

If there’s one thing that being a Trekker (Star Trek connoisseur) has taught me, it’s that successful leaders do not necessarily have to be the most knowledgeable people. They are, however, some of the most resourceful people. The challenge is that we all have different ways of going about being resourceful and solving problems. Captain Kirk relied on Spock…Captain Picard on Data…and Captain Sisko on Dax; today, we rely mostly on “Google”…whatever that means for you. Familiarity with the aforementioned characters is not important, but you (hopefully) get my point. So, if the end goal of problem solving is to obtain and transfer knowledge, how does one define “resourcefulness”? Does one’s approach matter?

It’s not news that building expertise in anything (regardless of scope) is a time-consuming activity, simply because the scope of knowledge is arguably infinite. Within large-scale work projects, one is typically exposed to unfamiliar areas, and so it makes sense to kick one’s resourcefulness into high gear to compensate for a lack of expertise. Leveraging the work of others from “Google” or elsewhere is one way to exhibit your resourcefulness, but it should not be the modus operandi for acquiring knowledge and solving problems, especially not in a “copy and paste” manner.

Relying solely on applying solutions from “Google” to solve your team’s problems might work sometimes, but it can also have unintended outcomes. To elaborate, the Dunning-Kruger effect is defined simply as: the less you know, the more you think you know. A negative outcome of this effect is missing the obvious: that other solutions (optimal or not) may also already exist. When you find a tried and true solution from anywhere, take the time to think critically about how the solution may not fit your team and project’s goals…otherwise you will miss a learning opportunity and/or, worst case, display a disconnect with your team and the context of the problem(s) you are trying to solve.

For me, step 1 of being resourceful is acknowledging that I do not know.

“I know that I do not know” is not a celebration of ignorance, but rather a realization that absolute certainty on anything is not a given…or, as Socrates would suggest in the “Apology of Socrates”: “one cannot know anything with absolute certainty but can feel confident about certain things [that I do not know].” The intent of this acknowledgment is not to humble myself but to be true to myself and my team, given my non-expert knowledge.

Problem solving, in my opinion, is not a self-serving exercise but an opportunity to add value to my team by challenging any form of myopic thinking, as opposed to potentially bringing in myopic views from elsewhere and falsely repackaging them as a “best practice”.

Anyway, back to the Trek universe…I wish you all much Qapla (pronounced Kap-la!) on your professional growth!

One question that your business leader should be able to answer concretely

Management gets all the love when business is good; so, it’s only fair that it gets all the blame when things go bad. This pressure varies by industry.

Business leaders of companies that profit from technical intellectual property have the most at stake due to the ever-evolving and highly competitive technology industry. Thus, to set these leaders apart, pay attention to how a leader answers the following question:

What are you doing to ensure your (our) technology isn’t displaced?

“We are investing in more R&D” is not an adequate answer.

For an external audience, that answer may suffice; however, internal employees should never accept this from their leaders or risk their company or technology becoming obsolete.

Answering that question properly requires long-term vision, which is why increasing profit via cost-out is usually the go-to for many business leaders. Vision is incredibly difficult, and some investors are not patient enough for “long-term vision”. Regardless of the justification for a cost-out strategy, one thing remains true: innovation is technology-driven, not finance-driven. When finance is the primary driver of innovation, subpoenas and broken hearts usually follow. By finance-driven, I’m not only referring to cost-out; I am also referring to the false mindset that innovation equates to doing things “cheaper and better” rather than focusing on the long-term implications.

Think about the highly competitive retail industry and how Walmart was able to differentiate itself from its peers. By thinking outside the box, Walmart bet big on its distribution centers and supply chain to improve efficiency and scale, which eventually placed the company in a position in which no peer can effectively compete. Walmart is not a technology company, but it is technology-driven.

I’ll use Figure 1 to revisit the question: What are you doing to ensure your (our) technology isn’t displaced?

[Figure 1: Performance and Price]

In Figure 1, there are three different products sold by three different companies. Product A is a low-end piece of technology, while Products B and C are mid-market and high-end items, respectively. The challenge all three products face is increasing performance at their current price levels, or at lower ones. An unlikely route for any of the three is increasing price at the same performance level. What will typically occur is that each product will try to expand into a competitor’s space to steal market share, either via a product downgrade or a new product line. This scenario puts Product B in a tough position because Products A and C will most likely creep into Product B’s wider mid-market space to expand their markets, thus locking Product B in place on the performance and price curve. If Product B is unable to move north, it will get displaced.

For an example, let’s imagine that there is a negative correlation between engine heat and engine performance; let’s also imagine that an engine feeds heat-sensor data into its performance algorithm in real time; thus, the more precise the sensor in the engine, the better the engine’s performance. If Products A, B, and C are engines with varying precision in their heat sensors (hence, varying performance), then each company might become occupied with improving its heat-sensing precision at a lower cost, or with creeping into a competitor’s market. But is this the right type of competition? Why not instead focus on reducing or eliminating the engine’s heating problem, which requires more investment in R&D but offers potentially higher long-term profits?

There are three aspects of technology that all technology leaders should have experience in as a foundation for becoming technology visionaries: the development, application, and commercialization of technology. Of the three, commercialization requires the most business acumen, and it emphasizes numbers as the way to encourage growth (e.g. lower cost is better). There is nothing wrong with this mindset; however, leaders with experience only in commercialization are simply figureheads with a strong appetite for numbers…delegators with an over-reliance on the expertise of others.

 

This article was inspired by a conversation with one of the smartest people I know, Chris Keimel.

Interesting Analogy of Group Behavior

As of November 4, 2013, the Congressional Job Approval is hovering around 9%.

Why…you ask? Well, here’s an interesting analogy I came across:

“If you start with a cage containing five monkeys, and inside the cage hangs a banana on a string from the top, and then you place a set of stairs under the banana, before long a monkey will go to the stairs and climb toward the banana.

As soon as he touches the stairs, you spray all the monkeys with cold water.

After a while another monkey makes an attempt with same result — all the monkeys are sprayed with cold water. Pretty soon when another monkey tries to climb the stairs, the other monkeys will try to prevent it.

Now, put the cold water away.

Remove one monkey from the cage and replace it with a new one. The new monkey sees the banana and attempts to climb the stairs. To his shock, all of the other monkeys beat the crap out of him.

After another attempt and attack, he knows that if he tries to climb the stairs he will be assaulted.

Next, remove another of the original five monkeys, replacing it with a new one.

The newcomer goes to the stairs and is attacked. The previous newcomer takes part in the punishment — with enthusiasm — because he is now part of the “team.”

Then, replace a third original monkey with a new one, followed by the fourth, then the fifth. Every time the newest monkey takes to the stairs, he is attacked.

Now, the monkeys that are beating him up have no idea why they were not permitted to climb the stairs. Neither do they know why they are participating in the beating of the newest monkey.

Finally, having replaced all of the original monkeys, none of the remaining monkeys will have ever been sprayed with cold water. Nevertheless, not one of the monkeys will try to climb the stairway for the banana.

Why, you ask? Because in their minds, that is the way it has always been!

This, my friends, is how the House and Senate operates… and this is why, from time to time:

ALL of the monkeys need to be REPLACED AT THE SAME TIME!”

I’ll have some Analytics please; Hold the Predictive

[Image: the formula discussed below]

What you see above is the alleged equation that brought down the subprime mortgage and collateralized debt obligation (CDO) markets during the recent financial crisis. It was predictive analytics at its best for the financial markets, and the model created a lot of upside in the markets. But, in retrospect, where were the statistics regarding the potential downsides: the false positives and discrepancies the model was capable of? Even today, regarding GE’s very own Industrial Internet black box, we’ve all heard how much of an economic impact a minute increase in efficiency will bring about, but where is the potential downside statistic for when predictive algorithms fail? Anyway, I digress. Wired.com provides a simple explanation of the above equation; but, more simply put, the formula estimates the probability of an event occurring in two different things (the T’s) given each one’s behavior as described by its distribution function (the F’s) and measured by a correlation figure (the gamma at the end). This logic is attractive within simpler contexts, but in a Big Data environment like the financial markets, involving complex interactions among assets and entities, it was too dumbed down…all thanks to the pursuit of achieving predictive analytics.
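For reference (my reconstruction, since the image above may not be legible here), the formula being described is the widely cited Gaussian copula, which couples the default times T_A and T_B of two assets through their distribution functions F_A and F_B and a single correlation parameter gamma:

\Pr[T_A < 1,\ T_B < 1] \;=\; \Phi_2\!\left(\Phi^{-1}(F_A(1)),\ \Phi^{-1}(F_B(1)),\ \gamma\right)

where \Phi_2 is the bivariate normal distribution and \Phi^{-1} is the inverse of the standard normal distribution. A single number, gamma, is asked to summarize all of the dependence between the two assets, which is exactly the “dumbing down” discussed below.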

Outside the context of Big Data, predictive analytics is nothing new; society benefits from it every day via actuarial science, credit scores, fraud detection, etc…all thanks to an abundance of static historical data and heuristics. However, even in this smaller-scale context, predictive analytics is still not foolproof. For example, CRM software might easily tell a salesperson to avoid marketing new products to a particular customer due to that customer’s high decline rate (let’s say 98%) for new products within the first year of market introduction. But we all know that there are “hard to predict” factors, unbeknownst to the salesperson or the CRM, that can cause such a customer to eventually say “yes”, and the result is a missed opportunity that could have long-lasting effects.

Having simple but intuitive analytics and key performance indicators (KPIs) for one’s data is one thing, but “predictivity” is a far more opaque concept within IT, much as quantum physics is to physics. We may think we understand it as a cohort, but we really don’t. It’s understandable why we embrace and pay attention to predictive analytics (especially within the context of Big Data) so readily; the stats are impressive: “85% of organizations already have Big Data initiatives planned or in progress”, “70% state that they plan to hire data scientists but they are finding that there is no reliable source of new talent in this category” [source: NewVantage Partners 2012], by 2020 the world’s data volume will be 44 times the size of 2009’s [McKinsey 2011], etc. The potential upside within tech is huge, so we can see why predictive analytics is here to stay. Imagine the sort of race to the top that the above equation encouraged within finance and the economic prosperity that ensued. Unfortunately, as we know, 2008 happened; does the future of predictive analytics within the “Internet of Things” have the same fate? After all, aren’t predictive algorithms highly influenced by generalized assumptions and a human view of the future? Can complex interactions between highly interconnected systems be put into an equation to predict their future state? If yes, is the potential downside risk of a failure even worth it? Maybe it’s time we returned to the KISS principle when it comes to dealing with data. Statistics already frowns on big data…so why try to predict the future with it?

New Android App: KWOG SOS

Description

The banks have it. Retail stores have it. Why don’t you? I’m talking about silent alarms.
With KWOG SOS, you can have your very own silent alarm that outsources the call for help when making one yourself is not an option.

For example, if you find yourself held up in a predicament (e.g. a robbery or bank hold-up) and unable to call 911 or otherwise ask for help, a simple double tap of this widget will send a distress signal and your GPS location to your designated emergency contacts, who can then assist in getting you help.

Version 1 features:
-Widget-based SOS icon (deliberately does not align with the standard app icons on the home screen, for visual ease of access)
-Notifies a preset list of emergency contacts with a distress SMS and your current GPS location
-Automatically sends location updates as your device’s location changes

What Do Great Explainers Have in Common?

2013, a year declared by the United Nations as the “International Year of Quinoa” and one in which the world’s oldest bank, Monte dei Paschi di Siena, almost ended its 540-year history, is off to a rather eventful start for me and, I’m sure, many of you. My New Year’s resolution for 2013: work on and improve my explanation skills.

We’ve all experienced the dreaded blank stares or awkward silence when communicating a new idea or giving a presentation, or have been on the receiving end, trying to just “get through” someone’s pitch. No one wins in either situation, and important information sometimes gets missed.

So, what do great explainers have in common? Empathy. That’s according to author Lee LeFever in his book, “The Art of Explanation”. The act of explaining is as natural to us as running, but as with running, there are mechanics that must be learned to make it all worthwhile and smooth. I recently picked up the book, and although some points are obvious (so obvious that I won’t provide examples), the author does provide a fresh perspective on “explanation”. Speaking to the lack-of-empathy problem, LeFever describes the “curse of knowledge” that many of us suffer from, which is when we know a subject so well that we can’t imagine what it’s like to not know it. The analogy the author uses is a person tapping out a song on a table and expecting someone else to identify the song in his or her head just from the table taps.

The author also introduces the idea of using an “explanation scale” model when it comes time to plan, package, and present an idea or a topic; I will provide some high-level examples of the model later, but I recommend reading the book for more details. Using the explanation scale, LeFever explains the difference between looking smart vs. making others feel smart and, more importantly, looking smart within our bubbles/circles vs. making others, who live outside our bubbles/circles, feel smart.

LeFever’s Explanation Scale (simplified!)

First, picture your audience member(s) as being on some “understanding” scale from A-Z or 1-5, etc. The goal is to move your audience from their current position on the scale to whatever your target is on the scale, which is typically where you and your level of understanding reside. In other words, how do you get the non-geeks on the same page as you and your team, the geeks?

[Diagrams: the explanation scale]

While many of us understand the above concept of “knowing your audience”, what’s challenging is creating an explanation that a diverse audience will appreciate. Some of us focus on pleasing the experts in the room,

[Diagram: an explanation pitched at the experts in the room]

while others make no assumptions of the audience and start from the basics before going into the details.

[Diagram: an explanation that starts from the basics]

It is highly improbable that a presenter can make everyone in the room happy if the audience is very diverse, but LeFever offers the following tips that can be used when framing a presentation.

[Diagram: explanation elements placed along the scale]

In the above diagram, there are elements of explanation that work best for various levels of understanding. Each element has its own technical definition, which LeFever provides to the reader. According to LeFever, it’s important that each element is packaged as a standalone and not intertwined at random points along the scale. “Connections” and “Stories” often complement each other or can be used interchangeably. Knowing when to use one over the other, or which one to use first, is key. For example, imagine you had the idea for Groupon and you had to pitch it to some investors. Would you explain the concept of Groupon by starting with a story of how it would work, or by first making the connection that it is a new way of looking at something old: coupons?

These visuals below help get LeFever’s points across on how to approach various people along your explanation scale.

[Diagrams: approaching people at various points on the explanation scale]

Imagine how different an organization, amidst a world of information overload, would operate if it had a great explanation culture. So how do you think others perceive you as an explainer?

It’s important to note that presenting and explaining should not be viewed as the same thing.

Greetings from China!

I am currently in China on business travel from Feb 14 to March 13, but that’s no excuse for my not having committed to more blog posts this year. Anyway, this is my fifth time in China but my first time here for work. In the first picture, I am at the GE China office in Shanghai, and in the second picture I am with my good friend, Matt Schone, at Morton’s Steakhouse’s Pearl Tower location.

[Photos: At the GE China office | At the Pearl Tower with Matt]

Enterprise Singularity

As enterprises push for more collaboration among employees and invest in social networking tools to enable this, what they cannot ignore is the power of 1, or what I like to call “enterprise singularity”…I will elaborate later. If companies truly want to connect employees with the right resources and people, their social network strategies cannot model those of popular tools like Facebook or Twitter. Both tools rely on agent-to-agent similarities that tell a user whom to “follow” or “add”, but this can create homophilous and sparse networks within an enterprise that neither optimize resource sharing nor encourage diverse involvement among employees; they can also encourage exclusivity, where some agents ultimately want to have more followers than followings, as in the Twitter model.

Earlier, when I mentioned the power of 1 within enterprise social networks, I was referring to a network density of 1. In other words, all possible agent-to-agent (i.e. employee-to-employee) connections within an enterprise social network have already been made, without the need for employees to manually “follow” or “add” each other or be concerned about unreciprocated “follows”. There are two scenarios for which I define this network density of 1: Company-wide or Function-wide (e.g. Finance, IT). In the former, an innovative content filtering system would be mandatory to aid in handling the data deluge. A benefit of this Company-wide approach is greater collective input and output across Functions. After all, just because an individual works within one Function (e.g. IT) does not mean he or she does not have a high passion for, or input potential toward, another Function (e.g. Finance). In the “Function-wide” scenario, the same “collective” benefit applies, but more so within Functions.
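For context, network density is simply the ratio of connections that exist to connections that could exist; in an undirected network of N employees with E connections,

\text{density} \;=\; \frac{2E}{N(N-1)},

so a density of 1 means every one of the N(N-1)/2 possible employee-to-employee pairs is already connected.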

Even if companies follow the traditional social network model, it will only be a matter of time before the company’s network density gains the right foundation to enable the goal of 1, or enterprise singularity. Typically, the term “singularity” reminds people of concepts like groupthink, technology-aided super-intelligence, Captain Picard and the Borg, etc.; but when I use the term “singularity” in an enterprise (no pun intended) sense, I am referring to an enterprise in which all (or most) employees are always “in the know” without much effort required, which only a high density, or the number 1, can enable. In the traditional model, enterprise singularity will only occur via employees who rank high in the following key measures: “eigenvector”, “degree centrality”, “closeness”, and “betweenness”. To briefly define each measure in layman’s terms: eigenvector centrality determines how well connected an agent is to other well-connected agents; degree centrality determines how many direct connections an agent has in the network; closeness measures the speed at which an agent can reach everyone in the network; lastly, betweenness determines the likelihood that an agent will be used as a middleman for communication between any two agents in the network.
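For readers who want to see these measures in action, here is a minimal sketch using Python’s networkx library on a made-up, five-employee graph (the names and connections are purely illustrative):

```python
import networkx as nx

# Hypothetical enterprise social graph: nodes are employees, edges are connections.
G = nx.Graph()
G.add_edges_from([
    ("ana", "bo"), ("ana", "cy"), ("bo", "cy"),
    ("cy", "dee"), ("dee", "eli"),
])

print("density:", nx.density(G))                     # 1.0 would be "enterprise singularity"
print("degree:", nx.degree_centrality(G))            # share of direct connections per employee
print("closeness:", nx.closeness_centrality(G))      # how quickly an employee reaches everyone
print("betweenness:", nx.betweenness_centrality(G))  # how often an employee sits between two others
print("eigenvector:", nx.eigenvector_centrality(G))  # connectedness to other well-connected employees
```

In this toy graph, “cy” scores highest on betweenness because every path between the “ana/bo” cluster and “dee/eli” runs through “cy”.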

Yes, real relationships are formed offline in the real world and sometimes replicated online; but, then again, many meaningful relationships do start online and are replicated offline in the real world. If companies truly want to differentiate their social networks from other collaborative social networks, they should save their employees some time and give them the number 1, at least Function-wide. A density of 1 that is Company-wide would create a data filtering nightmare for a large enterprise, but with the right filtering mechanisms in place, a Function-wide density of 1 is not too farfetched. 1, after all, may be that “fountain of youth” that many startup companies run on…how can big enterprises adopt that same vitality?

3Dsmithing

With 3D desktop printers trending below the $500 price range and a growing library of 3D files readily available online, I may soon be in the market for a 3D printer. For those unfamiliar with 3D printing, it is, in short, the realization of something called additive manufacturing. As an analogy, think of wood sculpting. The artist begins with a block of wood and proceeds to chip away at it to “subtract” material, leaving wasted wood chips. In contrast, with additive manufacturing, the artist prints out his or her desired outcome by simply entering the specifications of a 3D design into a computer program. The applications of this technology are endless: from misplaced game board pieces to prescription drugs to heavy machinery parts; soon enough, parts suppliers and consumer goods stores will be going out of business.

But what are the implications of widespread 3D printer use? Project Wiki Weapon offers a glimpse of the not-so-distant future and the capabilities of the technology. The project’s plans are threefold: 1) “Develop a fully printable 3D gun”, 2) “Adapt the design so it can be printed on less expensive 3D printers - without compromising safety”, and 3) “Further embrace the ‘Wiki’ root of the project and establish a printable gunsmithing commons”. Scary stuff. Project Wiki Weapon envisions a world where non-military-grade weapons can be easily printed at home, with little to no government regulation, and used at the owner’s discretion. According to a Guardian article, Project Wiki Weapon recently met its $20K funding goal and will soon be well underway to fueling more Second Amendment debates.

I respect what the Second Amendment stands for, but Project Wiki Weapon is not something an already uncertain and violent world will ever be ready for. One thing is for sure, though: when I eventually get my own 3D printer, my first creation will not be a weapon. Instead, I am leaning more toward a model aircraft or something else that reflects one of my many interests.

Gamification of the corporation: “All the world’s a stage…”

If Shakespeare’s world was one overrun with actors, then today’s is one overrun with gamers and status-seekers looking for the next great competition. Ever since we received our first stickers and “points” in primary school for good grades and/or good behavior, we have been obsessed with what is now widely known as gamification. The concept is simple: make everything competitive (with oneself and/or with others). Gartner Group defines gamification as “the concept of employing game mechanics to non-game activities” and predicts that by 2014, “more than 70 percent of global 2,000 organizations will have at least one ‘gamified’ application.” For U.S. corporations that are a little hesitant about adopting gamification in the workplace, the following statistic might be one to keep in mind: “71% of American workers are ‘not engaged’ or ‘actively disengaged’ in their work, meaning they are emotionally disconnected from their workplaces and are less likely to be productive” (Gallup, 2011).

Several scenarios and productivity applications can benefit from this concept; for example, if employees are given an incentive to perform unfavorable tasks in a timely manner, such as travel expense reporting, who knows what effect that will have on overall commitment to other tasks. Some of the integration ideas that companies should consider for their productivity applications include: badging, leaderboards, and the Stack Overflow model of Q&A voting.
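To make those mechanics concrete, here is a toy sketch in Python; the task names, point values, and badge are hypothetical, not a recommendation of any particular platform:

```python
from collections import defaultdict

class Leaderboard:
    """Toy gamification layer: points, badges, and a ranked leaderboard."""

    def __init__(self):
        self.points = defaultdict(int)
        self.badges = defaultdict(set)

    def award(self, employee, task, points):
        # Award points for completing a task, e.g. filing an expense report on time.
        self.points[employee] += points
        if task == "expense_report_on_time" and self.points[employee] >= 50:
            self.badges[employee].add("Expense Ace")  # hypothetical badge

    def top(self, n=3):
        # Rank employees by total points, highest first.
        return sorted(self.points.items(), key=lambda kv: kv[1], reverse=True)[:n]

board = Leaderboard()
board.award("alice", "expense_report_on_time", 60)
board.award("bob", "qna_answer_upvoted", 25)
print(board.top())            # [('alice', 60), ('bob', 25)]
print(board.badges["alice"])  # {'Expense Ace'}
```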

Is your company embracing gamification or ready to do so?

Published!

Congrats to my research colleagues on our recent publication, via Cambridge University Press, in RAIRO - Operations Research. It has been a great year-plus of revisions and waiting.

A hybrid genetic algorithm for the vehicle routing problem with three-dimensional loading constraints

Lixin Miao, Qingfang Ruan, Kevin Woghiren and Qi Ruo (2012). RAIRO - Operations Research, Volume 46, Issue 01, January 2012, pp. 63-82.

http://journals.cambridge.org/action/displayAbstract?aid=8586204

BYID, yes! But is BYOD really worth it?

Security comes with a new face every year. Accepting that security is a dynamic state is crucial for the protection of any enterprise and its assets. A famous philosopher once quipped, “It is in the nature of things that when one tries to avoid one danger, another is always encountered”. Take the infamous Stuxnet malware, for example; the malware was able to infiltrate Iran’s nuclear program within a network requiring rigorous security screenings, including biometric ID, and with no internet access. This raises the question: Is IT security better off treating security as a game of perfect information, where strategists should be valued more than tools and skills, with which the opposition is most likely equally matched (think chess)? Threats to a company, after all, can be both internal and external.

With the increasing popularity of initiatives like BYOD and BYID, IT departments are constantly trying to find the balance between openness and security. Bring Your Own Device (BYOD) seems to sit at one end of the spectrum and Bring Your Own ID (BYID) at the other. Both initiatives are part of a larger consumerization-of-IT trend that has been gripping the corporate environment since the advent of smart personal devices and cloud services. BYID may seem to pose a security threat at face value, but it is actually both convenient and provides a stronger security environment than a single-tier authentication method. With the continued growth of cloud services, identity needs to be taken off of users’ plates via delegated authentication, using standards like OAuth and OpenID. Imagine a use case where you provide an online service to users on a trial basis and/or as a full subscription. If a user only wants to use your service on a trial basis, he or she should not have to create a login to access it temporarily, but should instead be able to use a social network account, for example, to access a trial account. When it comes time to upgrade to a full subscription, the user will then have the option to create a login specific to your site, unless you choose to keep leveraging a third-party provider for authentication. This type of authentication brokering should be embraced more by companies of all sizes, and it is even more applicable to business partnerships. It becomes a true partnership when two different businesses can use their respective credentials to access non-sensitive data on each other’s sites.
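As a rough illustration of the trial-account flow described above, here is a minimal sketch of the OAuth 2.0 authorization-code flow using Python’s requests-oauthlib library; the client ID/secret, redirect URI, and provider endpoints are placeholders for whichever identity provider a site chooses to trust:

```python
from requests_oauthlib import OAuth2Session

# Hypothetical values: replace with your app's registration details and your
# chosen identity provider's endpoints.
CLIENT_ID = "your-app-client-id"
CLIENT_SECRET = "your-app-client-secret"
REDIRECT_URI = "https://yourservice.example/callback"
AUTHORIZE_URL = "https://idp.example/oauth/authorize"
TOKEN_URL = "https://idp.example/oauth/token"

# 1. Send the trial user to the identity provider instead of creating a new login.
oauth = OAuth2Session(CLIENT_ID, redirect_uri=REDIRECT_URI, scope=["profile"])
authorization_url, state = oauth.authorization_url(AUTHORIZE_URL)
print("Redirect the user to:", authorization_url)

# 2. After the provider redirects back, exchange the callback URL for a token.
redirect_response = input("Paste the full callback URL here: ")
token = oauth.fetch_token(
    TOKEN_URL,
    client_secret=CLIENT_SECRET,
    authorization_response=redirect_response,
)

# 3. The token now stands in for a site-specific login on the trial account.
print("Access token acquired:", "access_token" in token)
```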

At the other end, BYOD seems like a good idea at face value, but the openness it achieves comes at a high cost to personal privacy and turns personal devices into easier entry points into a company. With BYOD, one of the biggest threats is phishing from within an application that has a good install base. It’s important to remember that when it comes to choosing mobile applications, there is no central vetting service; users have to rely on reviews and the “reputation” of developers. This is a serious threat to corporate networks. Although there are methods, such as network access control (NAC) or virtualization, that can help protect a company’s network from intrusion via personal devices, one big disadvantage lies in the remote-capabilities arena. For example, company-owned devices can easily be encrypted or wiped clean in the event of a lost or stolen device; but with employee-owned devices, this policy poses a challenge and has far-reaching ramifications for privacy. The language in many corporate end-user agreements regarding personal mobile devices spells out clearly that personal data is indistinguishable from company data and can be audited or remotely deleted if there is ever a perceived or realized compromise to the company. Although the protection of company data is guaranteed in the event of a remote wipe, the avoidance of personal data compromise does not seem to be.

Unlike other IT trends of the past, such as outsourcing work to foreign countries, which can be more easily reversed, BYOD would be much harder to reverse if the initiative proves too expensive (e.g. the storage costs of virtualization) or unsustainable. Is BYOD really worth the risk it poses to both employees and employers? As an employee, if you were to misplace your company-registered personal device, would you report it to Security immediately, or wait until it turned up because you are trying to protect your personal data first?

Earth Day 2012

As a show of stewardship to the environment, I will be keeping a weekly photo journal of the tomato plant below (accessible via my Random Journals page on the right). Happy Earth Day 2012!

An Introduction to RSpec

By Guest Blogger: Max Woghiren, Google

RSpec is a testing framework for Ruby based on the notion of behavior-driven development. It’s designed to allow unit tests to be easily written in terms of behavior, and provides simple, intuitive documentation for the entities being tested. It’s a valuable tool that makes test- and behavior-driven development enjoyable and straightforward. Let’s check it out.

Reverse Polish Notation

Suppose we want to write a calculator. The calculator will operate using Reverse Polish notation. In Reverse Polish notation, operations come after operands; for example, 3 + 4 becomes 3 4 +. A calculator using this notation maintains a stack of numbers, and whenever an operation is entered, we pop the top two numbers from the stack, perform the operation, and push the result back onto the stack….
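Before you click through, here is a quick sketch of the evaluation logic described above, written in Python purely to illustrate the stack mechanics (the original post builds the calculator in Ruby and drives it with RSpec):

```python
# Minimal Reverse Polish notation evaluator: push numbers, and on an operator
# pop the top two numbers, apply the operation, and push the result back.
OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b,
}

def evaluate_rpn(tokens):
    stack = []
    for token in tokens:
        if token in OPS:
            b = stack.pop()   # second operand
            a = stack.pop()   # first operand
            stack.append(OPS[token](a, b))
        else:
            stack.append(float(token))
    return stack.pop()

print(evaluate_rpn("3 4 +".split()))  # 7.0, i.e. 3 + 4 in Reverse Polish notation
```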

READ MORE here

Crowd-Sourced Libel

When I first heard about OpenLabel’s idea for an app, I thought out loud in solitude, “Not another soon-to-be-defunct barcode scanning app!” But as I read more, I realized that OpenLabel’s new app was a crowd-sourced solution designed to provide more transparency about products and brands. For example, scanning a barcode would surface not only price information but also other data, such as the environmental impact of the product and whatever other information consumers wanted to share with society.

The idea isn’t original, but the timing seems right, as crowd-sourcing is becoming more commonplace. However, since OpenLabel will not be monitoring any of the user input, the potential for defamation of brands increases and could result in lawsuits against both the start-up and its user base. In addition, even if this app becomes successful and profitable, I do not think brand loyalty would succumb to its effects.

Year of the Euro

2011 is dedicated to the eurozone’s fortitude. Despite the mounting pessimism surrounding the fate of the 17-nation area and a prediction from Credit Suisse’s Fixed Income Research team last month that “we seem to have entered the last days of the euro”, the eurozone is showing signs of a long-term makeover more so than signs of impending failure.

Avi Tiomkin wrote in a 2008 Forbes article, “It is only a matter of time, probably less than three years, until the euro experiment meets its end…Tensions between inflation-obsessed Germany and growth-hungry Latin countries will spell its end.” As rising inflation continued to plague the eurozone in 2008, comparable to today’s eurozone environment, Avi Tiomkin’s argument was that the “Latin” countries’ (France, Italy, and Spain) thirst for growth ran counter to their more inflation-wary counterparts in the German bloc (Austria, Luxembourg, the Netherlands). Although he makes a valid point for the demise of the euro, he ignores the fact that the much stronger German bloc has both the most to gain if the euro survives and the most to lose if it fails. For example, Germany’s competitiveness and balance of payments have far outpaced those of its eurozone counterparts since the introduction of the euro, by more than they likely would have under a stand-alone currency.

Talks of a eurozone bailout from other countries and the ECB, earlier this year, have since dissipated significantly due to the potential moral hazard and increased inflation risk they pose, respectively. Unlike the 1997 Asian “capital account” Crisis, global financial contagion, in the event of a eurozone member default, is more of a threat in the current European Debt Crisis due to the highly intertwined and indebted Western financial system. Raising capital via the debt markets has been and continues to be a challenge for eurozone members due to the likely exploitation of the Crisis by bond speculators. 2012 is no doubt crucial for the future of the eurozone, and as the ECB continues to lend cheaply to eurozone banks, risk exposure will only increase; however, default by a member state is no longer a viable option.

At the end of the crisis, many expect the complete dismantling of the monetary union, but I think a slimmer eurozone is more realistic with Portugal and Greece being the first victims. However, before this process can begin, borrowing costs must decrease as recently experienced during Italian bond auctions.

Dark Matters

The following are my “favorite” plausible end-of-world scenarios, posted in a Guardian article by science correspondent Alok Jha. To add to the “Gamma Rays From Space” scenario, our sun also emits gamma rays during solar flares; a big enough solar flare would do the trick, as opposed to waiting for a nearby star to go supernova.

MEGA TSUNAMI

Geologists worry that a future volcanic eruption at La Palma in the Canary Islands might dislodge a chunk of rock twice the volume of the Isle of Man into the Atlantic Ocean, triggering waves a kilometre high that would move at the speed of a jumbo jet with catastrophic effects for the shores of the US, Europe, South America and Africa.

Danger sign: Half the world’s major cities are under water. All at once.

GEOMAGNETIC REVERSAL

The Earth’s magnetic field provides a shield against harmful radiation from our sun that could rip through DNA and overload the world’s electrical systems. Every so often, Earth’s north and south poles switch positions and, during the transition, the magnetic field will weaken or disappear for many years. The last known transition happened almost 780,000 years ago and it is likely to happen again.

Danger sign: Electronics stop working.

GAMMA RAYS FROM SPACE

When a super-massive star is in its dying moments, it shoots out two beams of high-energy gamma rays into space. If these were to hit Earth, the immense energy would tear apart the atmosphere’s air molecules and disintegrate the protective ozone layer.

Danger sign: The sky turns brown and all life on the surface slowly dies.

RUNAWAY BLACK HOLE

Black holes are the most powerful gravitational objects in the universe, capable of tearing Earth into its constituent atoms. Even within a billion miles, a black hole could knock Earth out of the solar system, leaving our planet wandering through deep space without a source of energy.

Danger sign: Increased asteroid activity; the seasons get really extreme.

 

Shakespearean Simians

French mathematician Émile Borel was one of the first intellectuals to pose these questions (not in their original form): How many monkeys would it take to successfully reproduce a work of Shakespeare (or any other literature), and how long would the process take? And, if those variables were infinite, what is the probability of success? The method: the monkeys are all typing randomly on 50-key standard typewriters.

To give a sense of the scale of the task, I will invoke some statistics and quotes from Seth Lloyd’s Programming the Universe. The following stats assume a 50-key standard typewriter. Ignoring capitalization, the probability of randomly typing ‘h’ is 1 in 50…typing ‘ha’ is 1 in 2,500…typing ‘ham’ is 1 in 125,000…and typing ‘hamlet. act i, scene i’ has a probability on the order of 10^-38 (approximately, “it would take a billion billion monkeys, each typing ten characters per second, for each of the roughly billion billion seconds since the universe began”).
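A quick back-of-the-envelope check of those odds in Python (assuming the same 50-key typewriter and ignoring capitalization):

```python
# Each keystroke is an independent 1-in-50 event, so a phrase of length n has
# probability (1/50)**n of being typed correctly in one go.
phrase = "hamlet. act i, scene i"
print(len(phrase))                                  # 22 characters
print((1 / 50) ** len(phrase))                      # ~4.2e-38, i.e. on the order of 10^-38
print((1 / 50) ** 1, (1 / 50) ** 2, (1 / 50) ** 3)  # 'h', 'ha', 'ham': 0.02, 1/2500, 1/125000
```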

A large number of experiments have been carried out to answer Émile Borel’s question using both real and virtual monkeys, but they have all, for the most part, failed or come to a standstill. One of the latest researchers to try the experiment is Jesse Anderson, an American programmer. Equipped with the Hadoop programming tool and Amazon’s cloud, EC2, Mr. Anderson set out to create the virtual project in August and has recently reported a 99.990% completion rate for Shakespeare’s collections (~3,695,990 characters) using millions of virtual monkeys. How is this possible within such a short time period? Mr. Anderson’s success isn’t due to intelligent algorithms or secret access to quantum computers, but to the very convenient constraints he has established in his program. One constraint is the disregard for punctuation and spaces, while another is the production limit of 9-character text strings per monkey at each time interval. The latter constraint enables Mr. Anderson to sift through each produced text string for sequences that match those within Shakespeare’s collections, which explains his high success rate.
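Here is a toy sketch of that constrained approach, as I understand it from the description above (the chunk length, alphabet, target text, and iteration count are illustrative only):

```python
import random
import string

def monkey_coverage(target, chunk=9, iterations=1_000_000):
    # Every distinct 9-character substring of the target still needing a match.
    needed = {target[i:i + chunk] for i in range(len(target) - chunk + 1)}
    found = set()
    for _ in range(iterations):
        # One "monkey" types a random 9-character string (letters only,
        # since punctuation and spaces are disregarded).
        attempt = "".join(random.choices(string.ascii_lowercase, k=chunk))
        if attempt in needed:
            found.add(attempt)
    # Fraction of distinct chunks reproduced so far. At this toy scale the result
    # will almost certainly be 0.0, which is exactly why Anderson needed Hadoop,
    # EC2, and millions of monkeys to make progress.
    return len(found) / len(needed)

print(monkey_coverage("tobeornottobethatisthequestion"))
```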

Without any constraints, ‘Borel’s’ project is nearly impossible to simulate using contemporary computers. Without the ubiquity of quantum computing, the answer to Émile Borel’s question will continue to be settled at ‘infinity’. It has been suggested by various computer scientists and physicists that it would be far easier for randomly typing monkeys to recreate computer programs, which are often shorter, less imaginative, and less coherent than literature, than to create masterpieces. So this raises the question: How many monkeys would it take to randomly write Jesse Anderson’s computer program, and how long would it take?

Smarter cars, not “smarter apps”

Random Fact: There has been an average of 10 to 11 million motor vehicle accidents annually in the US since 2004 (Source: U.S. Census Bureau)

Another Random Fact: Only ~12% of a car’s energy use goes toward providing momentum/moving the passenger (Source: Hofstra)

While both facts/issues have garnered the interest of academia and industry, some of the solutions proposed thus far have been counterintuitive, but expectedly diverse. Academia seems preoccupied with creating smartphone applications and giving the driver more responsibilities and distractions, while private enterprises are leaning more toward car automation and yielding fewer responsibilities to the driver. In a not-too-recent article, UC Berkeley and IBM announced plans for a partnership to create a smartphone app that would serve as a “prediction” model for daily traffic in order to combat congestion and fuel inefficiency, given a driver’s GPS data history. And on the East Coast, researchers at MIT and Princeton were reported to have developed a smartphone traffic app that provides real-time traffic signal information in advance to drivers for the sake of improving fuel efficiency. The catch: it’s a crowd-sourcing app that relies on high traffic activity in order to be effective.

The private-enterprise approach has been the costlier model, but, in the long run, it promises to be more effective in reducing motor vehicle accidents and improving fuel efficiency. I remember, a while back, reading an article about a Google project aimed at fully automating the car driving experience. Although the project is still far from being market-ready, the effort is definitely a step forward. With that said, I do not think a human driver will ever be fully replaced when it comes to the ubiquitous automobile, given the multitude of changing environmental variables on any given route and on any given day that a computer may simply not be capable of accurately assessing 100% of the time. I liken the scenario to GPS-guided smart bombs that can become error-prone due to electronic noise. In addition to its autonomous vehicle project, Google has recently partnered with Ford on a project similar to UC Berkeley and IBM’s “prediction model”. The only difference is that the Google and Ford project would integrate a car’s computer with the cloud, thus giving the car real-time decision-making abilities instead of relying on a smartphone app.

In my opinion, the private enterprise is correct in focusing on further automating the car driving experience.