While designing a social web site and being annoyed by all the bickering social media sites (share me! no, share me!), I got distracted and came up with a new way to understand business assets. Some people would call it Web 2.0 or social marketing, but to me it's an evolution of web site and enterprise level architecture assets, no matter what buzzword you like. The public (the internet) and the enterprise are thinking along the same lines. Business has always used the same types of assets, but without computers, not much changed unless you scaled up. It's still the same value creation process whether supplemented by technology or not.
We use technology to process information both in customer facing applications and in internal ones. The public facing internet has different goals than privately controlled enterprise networks, and I believe that has been the source of contention in business's adoption of social media marketing. The operational tension of trying to adapt a technology for the customer while maintaining control and stability of an internal system has led some traditional IT system thinkers to ban social network access entirely. But I'm getting ahead of myself.
Certainly, what I am breaking out here is not going to be a Web 3.0 design, because people already talk about all these pieces, but the structure that I'm putting them into is, I think, unique. I'd welcome any comments where you think I've just reinvented the wheel, of course. For me, it's a new set of ideas that came out of grasping SOA concepts and seeing ITIL deal with our social channels from a high enough level that it didn't matter whose definition of any 1.0, 2.0, or 3.0 term was right. The progression of technology assisted asset complexity is as follows:
- local information only
- local information and applications
- shared info
- shared applications
- shared info and applications
Information and applications
Web sites were originally created to distribute files of data. The idea of the web was that of a giant file server that everyone could access. This took the place of the employee who wrote down or handed out a pre-printed piece of information to every customer who walked into the business. Now it was publicly available and employees could direct a customer there.
The employee generally wanted to capture and record some customer information to process a transaction so they asked questions, made phone calls, and consulted managers to take care of the sales process. The employee also might want to automate how they did surveys and updated information to display. These types of automated processes were placed on a web site also.
The two types of assets, information and applications, were at the core of the value transfer system of business before it was automated, back when they were known more as knowledge and process. Now our automation of the two is maturing at large customer scale, and startups are first out of the gate to take advantage of the lowered costs of entry for these assets. Other asset types in ITIL terms, such as capital, infrastructure, organization, management and people, have stayed more stable but have had to adapt as the need to integrate with information and applications has changed. Entrepreneurs who understand customer development and lean startup ideas from Steve Blank
and Eric Ries
have the upper hand here. Their management concepts are based around information flow on the customer side and a more efficient process.
Local info only
The first telephone call was a shared piece of data. The first web site was a shared piece of data. But we changed our common communications model from one-to-one to one-to-many through the use of technology. That lowered the financial barrier to entry for distributing information. Inside the enterprise, the same shift happened when a corporate database was built allowing any employee access to company information.
If you as a customer wanted to know something about the company that was stored in the database, you had to go through an employee to get access. In a small business, they might look up the information in a Word or Excel file. The web site became a substitute for that employee as the distributor of that information, and that customer inquiry process became automated.
Local info and applications
Programmers very quickly began exploring how they could add process or logic to the data that was on the web. SSI, SHTML, CGI and other basic ways to augment the display of data were developed. People today still create these basic types of sites, first full of data, then progressively adding e-mail and lead acquisition functions. But the quantity of web sites now available has made these walled gardens less interesting unless they develop a massive amount of data and make it available through a great interface.
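To make that concrete, here is a minimal sketch of what an early CGI-style script did: take a query string, look something up in the site's local data, and return an HTML page. The inventory data and parameter names are made up for illustration.

```python
#!/usr/bin/env python3
# Sketch of an early-web CGI-style handler: local data plus a bit
# of logic to answer a customer's query. The "inventory" dict is a
# stand-in for the site's local data file.

inventory = {"widget": "In stock", "gadget": "Back-ordered"}

def handle_request(query_string: str) -> str:
    # Parse a query like "item=widget" the way early CGI scripts did.
    params = dict(pair.split("=", 1)
                  for pair in query_string.split("&") if "=" in pair)
    item = params.get("item", "")
    status = inventory.get(item, "Unknown item")
    return f"<html><body><p>{item}: {status}</p></body></html>"

print(handle_request("item=widget"))
```

The substitute-for-an-employee pattern from the previous section is visible here: the script answers the same inquiry a clerk once did.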
The corporate DBAs found that adding stored procedures allowed them to better manage their data. COBOL and other languages were designed with database access in mind, and database management systems grew their own languages to allow better access to and management of data. Now in the enterprise, these systems have grown into data directly accessible to the employee, called content management systems (CMSs), and data systems that manage more structured data, requiring applications to make sense of it all.
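The stored-procedure idea can be sketched like this, assuming Python's standard-library SQLite in place of a corporate DBMS, with an ordinary function standing in for a real stored procedure. The table and discount rule are invented for illustration.

```python
import sqlite3

# Sketch of the stored-procedure idea: keep the data-manipulation
# logic next to the data instead of scattered across applications.
# SQLite (stdlib) stands in for the corporate DBMS.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, balance REAL)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [("Acme", 120.0), ("Bolt", 80.0)])

def apply_discount(conn, pct):
    # A "procedure" owned by the data layer: every caller gets the
    # same rule, so the DBA manages it in one place.
    conn.execute("UPDATE customers SET balance = balance * ?",
                 (1 - pct / 100,))

apply_discount(conn, 10)
balances = dict(conn.execute("SELECT name, balance FROM customers"))
print(balances)
```

The point is the management benefit: the rule lives with the data, so changing it doesn't mean hunting it down in every application.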
Networks allowed for communication, and the idea that data didn't have to be all in one place created some advantages but also severe problems. For the corporate data schema, it was generally better to keep data in one place under central control, but the distributed database system evolved as businesses found efficiencies to take advantage of.
It was much easier for the web site to link to another piece of data on another web site, creating the distributed database called the world-wide web
. But without central control, it lacked usefulness as information increased. Google provided a simple interface to the web, but without meaning. People are still trying to turn the web into a giant library, complete with a Dewey hexadecimal Semantic Web
indexing system or something.
Distributed applications started to appear as the efficiencies emerged for managing logic in different places around the corporate network. One file might have been partially processed on one server and then passed off to another to complete the processing in a batch job. Now packets of information are passed as messages around the corporate network for processing. These packets can be managed by smaller applications known as services, and on the web these services become accessible as web services. Managing the data handled by these services, and the architecture orchestrating it, became known as Service Oriented Architecture (SOA).
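A toy version of that message-passing pattern, with invented service names, might look like this: each "service" is a small unit that accepts a message, does one piece of processing, and hands the result on, while a tiny orchestrator decides the order.

```python
# Sketch of SOA-style message passing. Each service handles one
# concern; the orchestrator plays the role of the architecture
# routing messages between them. All names are illustrative.

def validate_order(msg):
    msg["valid"] = msg.get("quantity", 0) > 0
    return msg

def price_order(msg):
    if msg["valid"]:
        msg["total"] = msg["quantity"] * msg["unit_price"]
    return msg

def route(msg, services):
    # The orchestrator: which services see the message, in what order.
    for service in services:
        msg = service(msg)
    return msg

order = {"quantity": 3, "unit_price": 9.5}
result = route(order, [validate_order, price_order])
print(result["total"])  # 28.5
```

In a real SOA the services would live on different machines and the messages would travel as, say, XML or JSON over the network, but the division of labor is the same.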
On the web, programmers started accessing other web sites' applications and combining their outputs to create a web site that had no data of its own but provided value, such as in seeing property values or crime statistics.
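The mashup pattern can be sketched as a join over two outside sources. The two functions below are stand-ins for calls to real third-party web services, and the addresses and figures are invented.

```python
# Sketch of a web mashup: a site with no data of its own combines
# two outside sources on a shared key (here, a street address).

def property_values():
    # Stand-in for a remote property-value API.
    return {"123 Main St": 250000, "456 Oak Ave": 310000}

def crime_reports():
    # Stand-in for a remote crime-statistics API.
    return {"123 Main St": 2, "456 Oak Ave": 0}

def mashup():
    values, crimes = property_values(), crime_reports()
    # Join on address: neither source shows this combined view alone.
    return {addr: {"value": values[addr], "incidents": crimes.get(addr, 0)}
            for addr in values}

for addr, info in mashup().items():
    print(addr, info)
```

The value is in the combination, not in owning either data set, which is exactly what made these sites interesting.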
The shared information or shared applications models by themselves were the beginning of a social transformation of the web from single managed to multi-managed. The first social users of the web were programmers who collaborated on how systems should work together on the internet. But the ability of common folk to participate in this sharing wasn’t available until the administrative structures were programmed by someone who wanted to automate the sharing.
Shared info and applications
The web initially started with basic interactive applications such as MUDs and bulletin boards, which became forums, but the techies were satisfied with just having fun. Later the push for business use of those same functions created more socially useful applications. Now the business use of social networks is what people are mostly talking about, because there's money involved. These sites provide customer access, shared data, and shared services across multiple sites, bringing together Twitter data, Facebook profiles, publicly accessible blogs, forums, news sources, and whatever else sites can get their paws on.
But the enterprise has trouble dealing with this level of interaction while maintaining control. Large scale enterprise resource planning (ERP) applications are large and complex beasts. When many people on the web are adapting and self-managing to market interests, the change is rapid and without disaster, as long as you don't look at fail whales and massive hacks. When a corporation manages an ERP adoption as a single project, the results are often poor. Other corporate systems are striving to grab public data and integrate it with internal information in new types of applications, such as social customer relationship management (SCRM).
I'm thinking that the corporation needs a new standard for this social and enterprise level architecture. I'd call it Socially Enabled Architecture (SEA), and it would answer strategic questions such as:
- how does a business monitor social media?
- how does a business engage with customers through social media?
- how does a business record that social data?
- how does a business share that data?
- what social media makes sense to capture?
- what relationships are useful to capture?
- what application types manage social data the best?
- what processes best manage social data interactions?
- what metrics should be used to determine social task successes?
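The record-and-measure questions above could start from something as simple as the sketch below: one possible shape for a captured social interaction plus one candidate metric. The field names and the metric are my own invention, not a standard SEA schema.

```python
# One possible record shape for a captured social-media interaction,
# and a sample metric over a log of them. Illustrative only.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SocialMention:
    channel: str          # e.g. "twitter", "forum", "blog comment"
    author: str
    text: str
    responded: bool = False
    captured_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def response_rate(mentions):
    # Candidate metric: share of mentions the business engaged with.
    if not mentions:
        return 0.0
    return sum(m.responded for m in mentions) / len(mentions)

log = [SocialMention("twitter", "@alice", "Love the product",
                     responded=True),
       SocialMention("forum", "bob", "Shipping was slow")]
print(response_rate(log))  # 0.5
```

Even a toy record like this forces the strategic choices the questions raise: which channels to capture, what counts as engagement, and what to measure.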
Value comes from assets and from finding better ways to share those assets. We are now sharing more of our business assets, both information and applications, than ever before. Consumers now find the barrier to entry on business assets so low that they easily become a service of value when they start blogging or tweeting. Upgrading that to a web site with more data and applications, taking the weekend socialite to a business venture, takes more understanding of customers and business in general. But the home-style web business is growing because of that, and the future looks to continue the trend. Myself, I'm looking forward to more mature SEA web sites in the future.