In our hyper-specialized society, way too often we read what we must, and the longer we live, the less we are open to new ideas.
We keep doing and reading more of the same, even though technology now enables any citizen who can read to access in a day what, until a few decades ago, wasn’t readily accessible even to experts (or would have required a significant financial commitment).
Moreover: the “social” element of technology allows any individual to tap the aggregate knowledge and skills of people with relevant expertise- for free.
If you are tempted to object, “but how many pretend to be experts while they aren’t?”- well, it is easier, cheaper, and faster to identify “knowledge cheats” online than offline.
The social impact of technological choices is accelerating, and technology, not only in the “rich” world, is becoming increasingly embedded in your life.
A couple of decades ago, even a book-worm like me had to acknowledge a simple truth: the expansion of private TV channels meant that, unless you watched TV, you were going to miss part of the social evolution.
But you could choose to ignore TV, and get the “Zeitgeist” through other people who, instead, probably devoted more time to TV than to books.
For your information: books are those things usually printed on paper and provided with a front and back cover- and do not require batteries.
Nowadays, while you are having lunch, if a kid asks a question, often (s)he doesn’t wait for the answer from parents or grown-ups: as I have seen a few times with relatives and acquaintances, they go straight for the online source.
A smartphone and Wikipedia replaced both a visit to the library and the potential embarrassment of having to answer questions on unknown subjects and being expected to deliver a definitive, unquestionable answer.
We are building a “details society”- will those asking a question and finding minute answers for themselves ever have the time and will to think about the “big picture”? Will schools eventually become dispensers of point-to-point information, with no framework of reference?
Or will citizens increasingly delegate critical thinking and innovation to somebody else, content and stupefied by endless amounts of details?
If knowledge is power, do unclassified and uncharted details really represent knowledge?
I think that 1000 bricks might compose a wall- but a pile of bricks without even a virtual blueprint is just a pile of bricks, and does not magically become a wall unless there is some intent and effort.
This article will start from the “smart” in smartphone (i.e. a technology able to do something more than its “dumb” version), and move onto the social impact of “embedded smart”.
Or: what happens when the “smart” side of technology is so smart that it delivers benefits to you as a mere side-effect of a huge set of features that you didn’t even consider?
And what could be the impact on our society, at least now and over the next couple of decades?
A little bit too smart
Everything, from phones to bombs, seems to be “smart”, i.e. able to react to changed conditions.
But a smartphone is focused on delivering some specific features- usually when you need them.
An “embedded smart” object is instead able to pre-empt your needs, and deliver services when needed, and not necessarily when you remember to activate them.
But this definition of “embedded smart” is not limited to objects carrying computers on board- even humble textiles can be “smart”, e.g. a suit that returns to its original unwrinkled state.
Of course- this “embedded smart” is not just an add-on to a set of basic functions, but a structural feature that differentiates the new object from its “dumb” siblings. And you have probably read about a supposedly “wonderful” future in which your fridge will be able to interrogate the objects it contains, and decide to order additional groceries.
I am looking forward to the first lawsuit due to a fridge enforcing a vendor-sponsored diet on its owners… call it product placement on steroids.
Frankly, some of our technology-based “smart” products do not fit my definition of “embedded smart”.
How can you call “smart” a product that reminds you of its own existence not because you need it, but just because this is a way to inform everybody around you that you are yet another customer of that specific brand?
But as with the example of textiles, other products are “smart” without being obnoxious, even if some electronics or sensors help make the product you are carrying around aware of the environment and able to adjust to its requirements.
A recent example is training shoes that report how they are being used, i.e. tracking how many miles you run and, who knows, maybe in the future also your heart rate and level of perspiration at each stage of your run.
A more sci-fi example is represented by augmented reality contact lenses, i.e. contact lenses that, through other sensors, “observe” the environment, and superimpose information on, say, monuments that you are watching.
Ranging from mere historical facts to real-time imaging, or offers of related services and advertisements for nearby restaurants based on your most recent choices.
The latter is still a little bit sci-fi for the time being, as until recently the lenses could be kept in your eyes for significantly less than an hour.
Anyway- these are almost “applied science oddities”: but what if the sensors extend to a network, and interact with each other?
“embedded smart” and society
Did you ever see a car parking itself? Or warning you when you were getting too close to another car?
Somebody would call both “smart”, but it is a mere matter of bits of technology (however complex) focused on just a single task, and reacting when that task is needed.
Instead, think about adding something that is considered smart not for the individual, but for society at large.
Interaction between sensors could enable a net transfer of responsibility to suppliers of vehicles for their customers’ compliance with “social laws”, e.g. alcohol level or use of drugs, creating a “virtuous” (?) circle that converts what you buy into a tool to enforce not just laws, but also “suggested behaviour”.
And this could be extended further, ending up with technology-induced social law and a reduction of our degrees of freedom.
Think about passports and ID cards with sensors, which in some countries are actually read by anybody in any shop: I remember when, in Brussels, I was asked for my ID to reserve a haircut, and saw that they were actually storing the information from my ID (luckily, I didn’t yet have an electronic one), capturing the data of my payment method, and so on and so forth.
For a haircut? Call it the Far West of privacy- and then we Europeans negotiated the “Safe Harbour” agreement to ensure compliant management of our data in the US, while our own shopkeepers store data that have no justification whatsoever, only because the required technology is cheap.
Now, imagine (as actually happens in some countries) that, for tax reasons, the cash register, already linked to that computer (as I saw in my case), becomes connected to the network to process the bill and report the income in real time.
Do you really expect a mere shopkeeper to enforce better data security than most corporations, which can easily afford the best security experts to ensure a decent level of compliance?
If you are realistically unable to ensure compliance, then you should consider a basic rule: data that aren’t stored and/or transmitted cannot be stolen.
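As a minimal illustration of that rule, a shopkeeper’s system could keep only the fields a transaction actually needs, replacing the raw ID number with a salted one-way hash. This is just a Python sketch of the principle; the function and field names are hypothetical:

```python
import hashlib
import secrets

def minimal_record(id_number, needed_fields):
    """Store only what the transaction needs: the raw ID number is
    replaced by a salted one-way hash, which can be re-checked if the
    customer presents the same ID again, but cannot be reversed."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + id_number).encode()).hexdigest()
    return {"id_hash": digest, "id_salt": salt, **needed_fields}

record = minimal_record("AB1234567", {"appointment": "haircut, 15:00"})
assert "AB1234567" not in str(record)  # the raw ID is never stored
```

Data that never enter the record cannot leak from it, whatever the shop’s security budget.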
And its corollary: data are points, dots of information- and connecting the dots does not necessarily deliver a perfect picture: you cut some corners, miss some details and, sometimes, you connect the wrong dots.
My fellow quantitative consultants and engineers often look for more data, assuming that more data could guide them toward a more precise identification of the context within which the “degrees of freedom” apply.
In some cases, I have seen the impact on businesses: not everything can be quantified and, unless the conversion from “qualitative” to “quantitative” weighs a series of imponderables according to their potential impact, you risk forcing a complex reality into a simplified model that ignores what is most relevant.
Just as an example: the benefit delivered by a cent spent is not necessarily the same, even across two parts of the same organizational entity; if you adopt a model that requires a linear reduction, the impact generated could actually be unbalanced across the business.
stirring a hornet’s nest
Now, imagine shifting from qualitative, imperfect laws to laws based on quantitative information derived from massive amounts of data collected by unknown, unqualified entities which, in turn, collect the information from sensors whose positioning does not necessarily follow homogeneous rules.
Would you consider, according to this description, those data to be enough to justify shifting the debate from one side to the other?
But then- this is exactly what happened in some countries to assess credit worthiness, or to collect information that is then shared about, say, agricultural production.
If you limit the number of “data collection points” (e.g. the shops), you can then enforce some rules on how, when, and what is collected, and how, when, and why it is processed, distributed, and shared with third parties.
What if you increase exponentially the number of collection points?
It might become a matter of etiquette: should you say that you are using “embedded smart” clothing, i.e. clothing that is able to adapt to the environment, or even exchange information with other items?
What if your “smart shoes” also store the locations you walked through? Should the shop you walk by be able to see those data, and maybe make offers based on your favourite itinerary?
But then, considering how little “smartphone” companies care about privacy, what about interoperability, i.e. what if your own sensors interact with somebody else’s sensors and “chat”, or build up a collective memory?
A collective memory? Yes, something containing not only information about what you do with the sensor, but also about its interactions with other sensors. In that case, when your sensors are “talking” with shops, you are sharing not only your own data, but also data about your “real world” social networking.
If you think that this is a little bit paranoid, far-fetched, or sci-fi, think again: your smartphone is already “chirping” about how you use it- why should your shoes or clothes behave any better?
In the end, it is always the usual, boring issue: who manages and defines which data can be reported back to authorities, producers, middlemen?
Long ago, I posted on this blog an article about some patents from Google that enabled the creation of a “floating state”- outside any jurisdiction.
Now, a “floating startup incubator”, right off California, bypassing the need for a working visa and so on is a little bit more than just a business plan.
And a couple of satellites have recently been launched from a floating platform- to be precise: American satellites from a Russian platform.
Imagine if sensors, using the new IPv6 technology that promises to deliver an individual IP address to each device, however small (including disposable ones), were built and delivered by a company based in no jurisdiction: do you really believe that we would be able to monitor the import of “unlisted” sensors?
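To give a sense of scale of that “individual IP address to each device” promise, here is a back-of-the-envelope Python calculation (the Earth surface figure is an approximation):

```python
# IPv6 uses 128-bit addresses; IPv4 uses 32-bit addresses.
ipv4_space = 2 ** 32          # about 4.3 billion addresses in total
ipv6_space = 2 ** 128

# Rough surface of the Earth in square metres (~5.1e14 m^2):
earth_surface_m2 = 510_000_000_000_000
addresses_per_m2 = ipv6_space // earth_surface_m2

print(f"{addresses_per_m2:.2e} IPv6 addresses per square metre")
```

Even if every disposable tag on every imported item received its own address, the address space would not be the bottleneck; the monitoring would be.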
Remember: with the new technologies and plenty of power sources around (e.g. read the previous postings on the recovery of kinetic energy), any imported item can be made “smart” without any need for batteries or any visible source of energy that would disclose the presence of a sensor.
And if you go around markets in Europe, you can easily see how many textiles and other objects, despite all the controls, borders, etc., are imported through unofficial channels.
Who should be involved to provide the expertise to define “pre-emptive” regulations and legislation, and does expertise exist outside vested interests (i.e. industry and NGOs or advocacy groups)?
But while questions are important, thinking about the social impact of technology implies moving a little bit forward- and seeing what could actually happen.
The upside: social shift
You have probably read a few articles about the “augmented reality” contact lenses.
If not, here is the story.
Have you ever seen a war movie set in recent times, where the pilot can see flight information on the cockpit window or on his own helmet? “Augmented reality” contact lenses could actually deliver the same service- but to anybody, anywhere.
As I wrote above- once a few “technical glitches” are solved (e.g. producing something that you can wear for more than a few minutes).
But why stop at expanding your sight? Currently, some “artificial noses” are being tested, and maybe one day they could match and exceed trained dogs, e.g. in identifying drugs, banknotes, or even the smell associated with lubricants used in machinery.
A little bit further down the road, these sensors could further enhance reality, e.g. embedding within your clothes the ability to “smell” when the other party is sweating, or “hear” the acceleration of their heartbeat or, just to stay with the eyes, to identify whether somebody is lying by looking at a complex set of physical signs (imperceptible contractions and relaxations of facial muscles, eye movements, the dilation or contraction of the pupils, the increased rate of sweating, etc.).
It is only a matter of miniaturization and high-volume production, and cost-per-unit reduction.
Maybe at first such an “augmented reality/embedded lie detector” suit would cost as much as a small car- but, eventually, it could become a mass-market product.
Another promising technology is actually the rediscovery of something pre-dating banks, when a society with a low level of reading and writing skills could find it worthwhile to seal an agreement with a simple handshake.
Enter technology, and you could even do a wire transfer while shaking hands: a new twist on “giving a helping hand to a friend”.
Is the ability to deliver a package or suit containing all those features a real evolution? No.
What new technologies enable is something more subtle, something you will see more and more, starting in pharmaceuticals: mass customization.
It is a simple concept: if you couple higher flexibility in production machinery, enabling an economically viable 1-unit production setup, with the ready availability of plenty of “off-the-shelf” modules that can interact with other sensors, each customer could design and receive his or her own unique device, using a simple Lego™ brick approach.
Now, imagine a future social interaction, where you do not know which sensors are available to those that you meet.
And think about a series of recent social interactions, in different environments, known and unknown: would you have behaved the same way, and said what you said, if those you met had had at their disposal sensors able to identify whether you were posing or lying, or to anticipate your next move from the mere contractions of your muscles?
And all this using existing sensors, such as those I listed above.
Of course, the Holy Grail of sensors would be something able to spot and give a meaning to the “magnetic field leakage” of your own brain.
If there is electrical activity, there is a magnetic field, isn’t there? And our brains aren’t Faraday cages.
Or, if you want to delve into sci-fi, maybe some mad genius could invent a gizmo to detect how our brains interact with their environment, and associate specific patterns to specific thoughts.
Although I think that this could work only at the macro level: the patterns of interaction related to specific thoughts are probably linked to the pattern of associations “wired” in its own unique way within each brain, while macro-level behaviours (fear, hate, love, boredom, etc.) are more or less represented in the same way, due to our shared brain structure.
But that would probably require a significant chunk of equipment anyway, and would therefore not be so easy to conceal and convert into something “wearable”- so, let’s leave that to sci-fi books or research labs, for the time being.
I have worked in ICT professionally since 1986, and I have been studying and practising it for a little longer than that.
Besides software engineering, for the last 20 years I have worked in what I could call “organizational engineering”, but I actually applied knowledge that I had acquired through books and “on the ground” long before I even read a little bit about “change management” or “decision tables” in the late 1980s.
Social or software engineering cannot avoid the fundamental weaknesses that I described at the beginning of this article: you can build a model, but it will be a finite model with a finite number of degrees of freedom.
And reality, social reality above all, does not necessarily comply with your model: too many variables which, compounded, could generate a completely different outcome from what you expected, planned, or desired.
Therefore, a technology-based social change, as it starts with something (technology) whose tolerances are limited by both design and physical constraints, cannot be left to us engineers: yes, it is the old idea that war is too important to be left to the generals.
Any technology-based social model requires something that is often missing from law: constant monitoring and reporting to identify whether, whenever any of the parameters changes, it still makes sense and, more importantly, is still socially acceptable.
And this has an interesting side-effect: the old society and the old way of making laws could tolerate having the check on the social viability of rules, regulations, and edicts de facto postponed to the next electoral cycle, while technology, with its unforgiving limitation of the degrees of freedom delivered to anybody who uses it, requires constant monitoring.
Besides the funny story above about how social interactions could change, how would our political system evolve? Should society, due to its complexity, delegate choices to “experts”?
Even if I could obviously be among those benefiting from this change, frankly I find it a nightmare scenario.
Because I have enough political sense and experience to avoid thinking that something designed by and for a certain Weltanschauung is independent of those who manage it.
Despite what many of my fellow colleagues say, I never believed that technology was, has been, or will ever be neutral.
If it were really neutral, then we could build the perfect model and perfect machine, and then dismantle our political system, replace it with polling, and put a set of machines in charge, using the Shuttle “decision making” style (i.e. an odd number of computers that have to concur).
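That “Shuttle decision-making style” amounts to majority voting over redundant computers. Here is a generic Python sketch of the idea (an illustration of the voting technique, not the actual Shuttle software):

```python
from collections import Counter

def vote(outputs):
    """Return the value a strict majority of redundant computers agrees on.
    With an odd number of voters choosing between two options, a tie is
    impossible; if no strict majority exists, the system must fail safe."""
    if len(outputs) % 2 == 0:
        raise ValueError("use an odd number of voters")
    value, count = Counter(outputs).most_common(1)[0]
    if count > len(outputs) // 2:
        return value
    raise RuntimeError("no majority: fail safe")

# Three redundant computers, one of them faulty:
assert vote(["fire", "fire", "hold"]) == "fire"
```

A single faulty computer is simply outvoted- which is exactly why the scheme needs an odd number of concurring machines.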
I still prefer spirited debates, even if sometimes they verge on the pointless- debating for debating’s sake.
But I would rather risk a few pointless debates than create so many barriers to debating that it becomes basically impossible to debate unless you have already been co-opted into debating.
When I see members of a Parliament, as in my birthplace (Italy), who are sons of other members of Parliament and have been there for more than 20 years, I wonder whether we can still claim to be a democracy, or whether we are instead converging toward a formally representative system that actually prizes convergence- i.e. whether our democracy is becoming more and more similar to a mechanistic “more of the same”, as anything outside the scheme is not allowed or entitled even to enter the discussion.
Therefore, if we let technology guide our social model, should we consider that, while the formalization must be done by a group of “certified” (elected) officials, the debate and monitoring should increase entropy, and therefore be carried out elsewhere?
debating vs. complaining- be proactive
What happens when you shift marketing and R&D downstream, to the distributors and the end-users?
This could happen with “off-the-shelf” smart technologies, i.e. components that can be selected from a website and then assembled.
If a new product is identified by unifying existing “smart” technologies, companies could then evolve integrated products for which there is actual demand- and do so on a really short cycle, while converting the initial customers who generated the specifications and demand into de facto marketeers/agents.
This would shift the relationship with “embedded smart” customers: as in business intelligence, some will just be ordinary consumers, others will contribute back often (and should eventually be involved in the process), others will contribute that individual “white swan”, and others yet will start as consumers and become competitors.
Changing the IPR concept, your product might end up as a component of a product sold by a “virtual” company (I posted old articles and experience-based research on “virtual companies” between 2003 and 2005, at http://www.businessfitnessmagazine.com).
“embedded smart” will be a two-way acknowledgement of added value: if you just do the electronics, but somebody else, by connecting your electronics with those produced by others, were to deliver a service and added value, you could even end up with a behemoth such as Samsung deriving most of its profit not from high-volume, long-term customers, but from “vicinity engineers” who create a new product by buying retail what they need, and then find new “customers” in their own neighbourhood.
Obviously, those “vicinity engineers” would have no clue about post-sales service, RMAs, legal boundaries, etc.- and, therefore, a company such as Samsung could offer those services too, either in partnership or through a part of its “after sales” operation.
Stated more simply: if you assemble a product from components that each comply with all the regulations, your product does not necessarily comply with those and other regulations- therefore, you assume a risk when you assemble your own product, but when you start selling it, you are entering a minefield.
By supporting those unusual post-sales activities, industrial behemoths could actually innovate at a faster pace.
And this brings up another issue: who should decide the lifespan and location of the information collected by sensors?
Moreover: who should decide the lifespan of the sensor itself?
Aren’t we risking introducing infoglut on a massive scale, creating compulsive information habits that will get everybody used to a decision-making lifestyle based on massive volumes of minutiae- an almost perfect memory of anything you did and of every interaction?
Shouldn’t we extend schooling from transferring skills, as it became over the last few decades, back to transferring mores?
You would still have the freedom to choose your own mores, but at least we would have, in a highly mobile, intensive data-mining society, a guiding light on what is socially acceptable, or at least pointers to where to check for limitations.
To paraphrase an old movie: this article is not the beginning of the end of a discussion about the social impact of technology, but the end of its beginning.
There is no need to talk about smartphones that sometimes could be called scoundrel phones, sifting through reams of data for purposes that weren’t part of the bargain when the customer bought the phone.
Imagine replacing the cameras, with facial recognition, and augmented reality interaction with your sensor: what would happen then? Micro-taxation on each social interaction, according to the amount of common resources that you are using, e.g. how many minutes you sit on a bench in a park?
Yes, technology is not neutral: even if we, on the technological/engineering side, way too often overhype the benefits and dump everything else under “negative externalities”- what somebody calls side-effects or collateral damage (note: if you “interact” with me, unless you are in the military or associated with it, please refrain from using phrases more appropriate to a different context just to sound more resolute, while instead sounding just careless).
The more technology becomes embedded in society, the more technology is a social choice, not simply a technological option.
Enjoy your technology- cum grano salis…