
The Nouveau Engineer

Data engineering is set to be the profession of the future – here’s why

The centrality of artificial intelligence across sectors is glaringly obvious today – more obvious than ever before owing to the COVID-19 pandemic. This has, of course, had a direct bearing on the landscape of skills required to stay relevant – and to carry industry forward. AI and Cloud computing have introduced possibilities for developers at a scale that was inconceivable even a decade ago. Hence, the new task of today is both to engineer these developments and to optimise the processes of analysis and inference. Enter: the Data Engineers!

The New-age Engineer

The engineering specialty has transformed drastically over the past few decades. Before the age of the Cloud, data engineers and developers primarily managed production processes and scale within the software itself – there was no real dichotomy between software logic and hardware resources. Today, however, Cloud and elastic computing techniques have split engineers into distinct specialisations – some building software solutions, products and services, others handling hardware development, management and services – in order to truly reap the benefits of these elastic computing platforms.

Back-end Engineers: These are the people involved in building the software logic and algorithms that are to be implemented. They work with hardware development and look to optimise software based on the computing power available. Their projects will (almost always) be ones that require massive scaling – especially when scaling involves far more complexity than a simple ‘if-then-else’ logic chain. The need for specialised expertise is, hence, a direct product of ever-evolving software complexity and computing power.

Front-end Engineers: These are the people involved in the topmost application layer and the UX/UI interface for the user. An engaging, logical and adaptable interface for man-machine interaction requires considerable dexterity and is a major part of the development process. Streamlining app development and production, as well as interfaces, however, still requires a major paradigm shift (one that seems to be in the offing).

DevOps Engineers: These are the individuals who ‘are responsible for scaling the software applet (the code container) onto the elastic Cloud for deployment so that it can effortlessly cater to as many users as are expected, and elegantly handle as much load as needed.’ Effectively acting as the route to the Cloud, these are people who must have in-depth knowledge of Cloud as well as software infrastructures.

Data at the Wheel

This paradigm shift in organisational structure is effectively going to be driven by one major factor: data. Data is gradually shifting its role from being a cog in the wheel to being (almost) the wheel itself.

“Both Machine Learning and […] Deep Learning, are disciplines that leverage algorithms such as neural networks, which are, in turn, nourished by massive feeds of data to create and refine the logic for the core app.” These require the hand of data scientists, who design and train algorithms in order to execute system logic. This task is, however, much more gruelling than it initially appears – because it is in the tuning, and retuning (and retuning), of these algorithms that the battle for the organisation is either won or lost.
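By way of illustration only, here is a minimal sketch of that tune-and-retune loop – a toy grid search over a small neural network using scikit-learn on synthetic data, standing in for the far larger pipelines the article has in mind:

```python
# A minimal sketch of the tune-and-retune loop described above, using
# scikit-learn and synthetic data as stand-ins for a real pipeline.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the "massive feeds of data" that nourish the model.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# The "tuning and retuning": search over candidate network shapes and
# learning rates, refitting the model for each combination.
param_grid = {
    "hidden_layer_sizes": [(32,), (64,), (64, 32)],
    "learning_rate_init": [1e-3, 1e-2],
}
search = GridSearchCV(MLPClassifier(max_iter=500, random_state=0),
                      param_grid, cv=3)
search.fit(X, y)

print(search.best_params_, round(search.best_score_, 3))
```

Each combination in the grid is one "retuning" pass; real teams repeat this cycle every time new data or a new objective arrives.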

Almost all selection, optimisation and management of processes ultimately rests on how clean the data coming through the pipeline can be made – which shows the importance organisations must place on data procurement and formatting. Machine and deep learning algorithms require copious amounts of data reconfiguration before that data is viable for use and expansion. Hence, processing power and computational capacity need to be substantial in order to truly handle the test that is big data.
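As a toy example of the kind of cleaning and reformatting such a pipeline performs before data ever reaches a model (the column names below are invented purely for the illustration):

```python
# A toy sketch of pipeline-level data cleaning. Column names ("user_id",
# "amount", "ts") are hypothetical examples, not taken from any real system.
import pandas as pd

raw = pd.DataFrame({
    "user_id": [1, 1, 2, None],
    "amount": ["10.5", "10.5", "not_a_number", "7"],
    "ts": ["2020-01-01", "2020-01-01", "2020-01-02", "2020-01-03"],
})

clean = (
    raw.drop_duplicates()                # remove exact duplicate rows
       .dropna(subset=["user_id"])       # discard rows missing a key field
       .assign(
           amount=lambda d: pd.to_numeric(d["amount"], errors="coerce"),  # coerce bad values to NaN
           ts=lambda d: pd.to_datetime(d["ts"]),                          # normalise timestamps
       )
       .dropna(subset=["amount"])        # drop rows whose values could not be parsed
)
print(clean)
```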

So, we turn back to data engineers once again.

The Art of War – AI edition

How data scientists are changing the global military landscape

The use of Artificial Intelligence (AI) in a country’s military forces is not a new phenomenon by any means. In fact, since the mid-2010s, the world has been witnessing a steady arms race for better military AI. The quest for military AI dominance is essentially a precursor to dominance in other sectors as well, especially when countries are seeking both economic and political advantage.

Project Convergence

According to the SIPRI Military Expenditure Database 2020 factsheet, the United States spends about US$732 billion annually on defense. This accounts for about 38% of global defense spending and is greater than the next ten countries combined. Hence, it should come as no surprise that the training exercises of the United States Army are also the most technologically advanced – and the most expensive as well.

Dubbed ‘Project Convergence 2020’, the exercise held at the Yuma Proving Ground in Arizona was set in the year 2035 and was the first in a series of annual demonstrations showing how the armies of the future would fight their battles. It used artificial intelligence and autonomous systems to ‘take sensor data from all domains, transform it into targeting information, and select the best weapon system to respond to any given threat’ in real time over the course of the simulated war. According to Brigadier General Ross Coffman, director of the Army Futures Command’s Next-Generation Combat Vehicle Cross-Functional Team, AI was used to autonomously conduct ground reconnaissance, employ sensors and then pass that information back.

The exercise showed that the use of autonomous technologies can drastically reduce the sensor-to-shooter time gap – from 20 minutes to 20 seconds, depending on network quality and the transmission distance between the point of collection and its destination. Space-based sensors operating in low Earth orbit were used to capture battleground images, which were sent to the TITAN ground station in Washington, where they were processed and fused by a state-of-the-art AI system called Prometheus. TITAN was envisioned as a “scalable and expeditionary intelligence ground station”, supplying data to Prometheus to be fused, identified and analyzed.

FIRESTORM

Once threats picked up by TITAN are processed by Prometheus, the targeting data is passed on to the Tactical Assault Kit, a software program that gives operators and data scientists an overhead view of battlefield positions. Additional images and live feeds can also be pulled up as and when needed.

The best response is then determined by the Army’s new, indigenously developed AI-based computer brain – the FIRES Synchronization to Optimize Responses in Multi-Domain Operations, or FIRESTORM. According to Coffman, FIRESTORM “recommends the best shooter, updates the common operating picture with the current enemy situation and […] admissions the effectors that we want to eradicate the enemy on the battlefield.”

The FIRESTORM program processes terrain, weapon availability, proximity to threats and other auxiliary factors in order to determine the best-response firing system. Data scientists and operators at the FIRESTORM unit process the information coming in and send orders to on-ground soldiers or weapons systems within seconds of a threat being identified. It also provides critical target deconfliction, ensuring optimised and efficient deployment of weapons.

By using aided target recognition and machine learning to train algorithms and identify types of enemy forces, military prowess has improved by leaps and bounds – truly taking the art of war to the next level.

Benefactor: Microsoft

Why Microsoft’s GPT-3 license should be a win-win for all parties involved

Ever since its release, OpenAI’s GPT-3 series has persistently stayed at the centre of discussion among researchers, developers and entrepreneurs around the world. Regarded as one of the world’s most advanced machine learning-based text (and image) generators, it uses, at full capacity, over 175 billion parameters to train its language representations – an increase of almost 11,500% over its predecessor (for reference, the entire English Wikipedia constitutes only 0.6% of GPT-3’s total training data).

The sheer immensity of the scale at which it functions has made GPT-3 one of the world’s costliest natural language processing models, costing the firm a whopping US$12 million to research and train the final model. The attached costs also involve (1) tens of thousands of dollars in monthly cloud computing or server and electricity costs for running the model; (2) possibly more than a million dollars in yearly retraining costs due to model decay; and (3) the additional costs of customer support, marketing, IT, security, legal and other requirements of running a product, which could run to tens of thousands of dollars depending on the number and size of customers OpenAI acquires.

Financing the GPT-3

In the era of commercial AI, research labs like OpenAI usually need the deep pockets of wealthy tech firms to finance their research over the long term. In 2019, OpenAI made the transition from a non-profit organisation to a for-profit company in order to cover its costs in its long quest to develop artificial general intelligence (AGI). It was also then that Microsoft came in with a US$1 billion investment in the firm.

Given its steep training costs and invaluable potential, OpenAI faces a great challenge in even breaking even on the costs of developing, training and running its huge GPT-3 neural network, let alone turning it into a revenue-generating business. It should thus come as no surprise that it has decided to commercialise its landmark natural language processor through a carefully crafted tiered-payment scheme along the lines of a SaaS (Software-as-a-Service) model, instead of making it open-source. This has been done through its tie-up with Microsoft, which in September 2020 announced that it would be exclusively licensing the GPT-3 platform as part of its ongoing partnership with OpenAI.

No Precedent

Given GPT-3’s cost structure and pricing plan, OpenAI is expected to need several dozen customers on its Build Tier plan (US$400/month) just to break even on running costs – and many more to cover the expenses involved in developing, training and (eventually) retraining the model.
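A back-of-the-envelope calculation, using the US$400/month Build Tier figure above and an assumed monthly running cost chosen purely for illustration, shows why "several dozen" is the right order of magnitude:

```python
# Break-even sketch. The US$400/month Build Tier price is from the article;
# the monthly running-cost figure below is an illustrative assumption only
# (the article says "tens of thousands of dollars" per month).
build_tier_price = 400          # US$ per customer per month (from the article)
assumed_running_cost = 30_000   # US$ per month -- assumption for illustration

customers_to_break_even = assumed_running_cost / build_tier_price
print(customers_to_break_even)  # 75.0 customers, i.e. several dozen
```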

The issue, however, is that there is no precedent for this. GPT-3 is a first-of-its-kind natural language processor capable of both zero- and one-shot learning (in image processing, for example, instead of treating image generation as a classification problem that requires massive training data, GPT-3 turns it into a difference-evaluation problem, supplying images with a similarity rating in proximity to the image being trained on). Hence, finding the appropriate use case for businesses becomes a challenge.
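To see what zero- and one-shot learning mean in practice, compare the two hypothetical prompts below – the only difference is whether the model is shown a single worked example before being asked to complete the task (the model call itself is omitted; only the prompt construction is shown):

```python
# Minimal illustration of zero-shot vs one-shot prompting for a text model
# such as GPT-3. The translation task and examples are invented for the sketch.

zero_shot = (
    "Translate English to French:\n"
    "cheese =>"
)

one_shot = (
    "Translate English to French:\n"
    "sea otter => loutre de mer\n"   # a single worked example ("one shot")
    "cheese =>"
)

print(zero_shot)
print(one_shot)
```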

Historically, many (possibly useful) products have died swift deaths because they could not find the right product-market fit to acquire new customers in a cost-effective way. So far, however, OpenAI claims to have received several thousand applications to run its GPT-3 model (in a free beta version) from businesses looking for an appropriate application of the software. What will be crucial to OpenAI in making the business model successful over the medium run, however, is its licensing relationship with Microsoft.

The Microsoft Factor

“Microsoft is teaming up with OpenAI to exclusively license GPT-3, allowing us to leverage its technical innovations to develop and deliver advanced AI solutions for our customers, as well as create new solutions that harness the amazing power of advanced natural language generation.”

Kevin Scott, Chief Technology Officer, Microsoft.

The point to note here is Microsoft’s customer base. Over a million companies globally use the MS Office suite and the digital assistant Cortana. Additionally, Bing is the world’s second-most popular search engine. These are technologies which make prodigious use of natural language processing technology – and the adoption of the advanced GPT-3 engine could be exactly what both parties need, moving ahead.

For OpenAI, the hundreds of millions of users will provide key insights and experience that will allow GPT-3 to grow and adapt. For Microsoft, they will be the exclusive owner of one of the world’s most advanced AI technologies. Seems like a win-win, if there ever was one.

Data Engineers Set to Fly

From CIA to Morgan Stanley, data engineers are in high demand

From Morgan Stanley to Ericsson, IBM and even the CIA, the US spy agency, nearly every organization is looking to hire data engineers. A LinkedIn search shows almost 15,000 openings for data engineers, while Indeed.com shows over 10,000. However, there is confusion about the difference between a data engineer and a data scientist, the latter having the higher glam quotient in the mind of the average person. In simple terms, data engineers process the raw data while data scientists explore that data to find actionable insights. So, without data engineering, data analysis would not be possible.

The Dice 2020 Tech Job Report labelled data engineer as the fastest-growing job in technology in the US in 2019, with 50% year-over-year growth in the number of open positions. Interest in the position has been increasing over the years, as organisations discover that data engineers are the key personnel for unlocking the value of their data. The Dice report, which analysed job postings from the past year as well as the US’ top tech-job hubs, lists the most in-demand skills for the role; it notes in particular that AWS (i.e., Cloud) skills are in high demand for data engineering jobs, as are Scala and Hive skills.

Data engineers, in any organisation, work together with data consumers and information and data management officers to determine, create and populate optimal data architectures, structures and systems. Let’s look at what the CIA wants its future data engineers to do. The primary goal is to increase discoverability and retrievability, facilitate dissemination, and ensure the delivery of timely and relevant intelligence. The agency is being overwhelmed by the amount of data it collects. The job description for data engineers would, therefore, include designing how the data will be stored, consumed, integrated and managed by different data entities and digital systems.

Military and intelligence agencies around the world deal with a multitude of sensors like, for instance, the kind of tech found on drones. The CIA’s own sensors suck up incalculable mountains of data per second. Officers badly want to develop massive computational power within a relatively small, low-power sensor so the sorting can be done readily on the device itself, instead of being sent back to a central system. Data Engineers – with their extensive knowledge of data manipulation, databases, data structures, data management, and best engineering practices – are therefore going to be crucial players in a complex data-crunching organisation like the CIA.
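The idea of doing the sorting on the device itself can be illustrated with a deliberately simple sketch – a sensor that transmits only the readings that matter, rather than streaming every raw sample back to a central system (the threshold and readings below are invented for the example):

```python
# Toy illustration of on-device filtering: a low-power sensor keeps only
# readings that cross a threshold and transmits those, instead of sending
# every raw sample back to a central system.
def on_device_filter(readings, threshold=0.8):
    """Return only the readings worth transmitting upstream."""
    return [r for r in readings if r >= threshold]

raw_stream = [0.12, 0.95, 0.30, 0.87, 0.05]   # raw samples captured by the sensor
to_transmit = on_device_filter(raw_stream)
print(f"kept {len(to_transmit)} of {len(raw_stream)} samples")
```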

In a prior development, CIA Labs – the agency’s newly formed solutions arm – had already announced that it was recruiting fresh technical talent in diverse cutting-edge domains. These include areas like artificial intelligence, data analytics, biotechnology, advanced materials, and high-performance quantum computing. Incentives are also being considered by CIA Labs to compete with private research establishments. CIA scientists will now be able to publicly file patents on the intellectual property they work on – and collect a portion of the profits too.

Hired’s 2020 State of Software Engineers report also shows that demand for data engineers rose a respectable 45%. The clear leaders in the engineering profession, however, are those who can build virtual reality and augmented reality (VR and AR) applications: according to Hired, demand for AR/VR engineers rose a whopping 1,400% in 2019, growth the company attributes to the maturation of AR and VR technologies. Glassdoor estimates the average base pay for data engineers at US$102,864 annually, based on earnings reported by thousands of companies.

Ransomware Plagues COVID Trials

As nations race each other in search of the elusive vaccine, covert operators play dirty tricks.

Research that deals with data has long gone digital. That is of course a boost to data crunching and more accurate findings, but no longer can we ignore the perils of the digital world. Cybercriminals are increasingly launching ransomware attacks on the healthcare sector. Of late, the COVID-19 vaccine trials have turned out to be a soft target. More unnerving is the fact that quite a few state players are also engaging in this dirty game, along with traditional lone-wolf cyber bullies.

The latest in a long string of major attacks came on 20 September when eResearchTechnology (ERT) – a Philadelphia-based developer of healthcare software – found that its systems were under a ransomware siege. Employees in clinics and labs that use ERT’s software realised that they had been locked out of their data. ERT was quick to allay fears regarding patient safety, stressing that clinical trial patients were not impacted by the ransomware attack. However, affected clients admitted that the attack forced researchers to track patients for the trials with pen and paper.

ERT products are in widespread use for clinical trials across Europe, Asia and North America, including several trials and tests for the COVID-19 vaccine. The company website states that its software was used in three-quarters of the drug trials approved by the FDA last year. Its clients include leading names like IQVIA, the contract research organization behind AstraZeneca’s COVID vaccine trial, and Bristol Myers Squibb, the pharma company that leads a consortium working on developing a rapid test for the virus. However, two other pharma majors in the COVID vaccine race – Pfizer and Johnson & Johnson – said their trials were not impacted, as ERT is not their technology provider.
While confirming the attack to the media, Drew Bustos, ERT’s vice president of marketing, asserted that the organisation had taken quick measures to address the threat.

He said that they immediately took the systems offline, sought help from external cybersecurity experts and notified the Federal Bureau of Investigation – and that the attack “has been contained”. Understandably, the company did not say whether any ransom was actually paid to get the systems unlocked. Unfortunately, of late, several organisations have had to take the pay-up route to get their data back.
Pharma companies and drug labs have been repeatedly attacked by international hackers during the pandemic. Sources report more than a thousand ransomware attacks on American cities, counties and hospitals over the past 18 months. This is because some nations are resorting to underhand methods to track other nations’ progress in tackling the coronavirus. According to New York Times reports, the F.B.I. and the Department of Homeland Security directly warned the US administration in May of Chinese government spies trying to steal American clinical research through cybertheft. The NYT report also mentions that, according to security researchers, over a dozen countries have redeployed military and intelligence hackers to gather any available information on how other countries are doing in terms of a vaccine or cure.

Merely a week before the ERT attack, another major ransomware attack was launched on Universal Health Services, a key hospital chain with more than 400 locations. NBC News termed this “one of the largest medical cyberattacks in United States’ history.”
Other nations are not being spared either. Only weeks ago, Russian cybercriminals attacked 30 servers at Germany’s University Hospital Düsseldorf. As systems crashed, the hospital had to refuse emergency patients – leading to the death of a woman in a life-threatening condition. Although the link is indirect, experts consider this incident to be the first recorded death attributable to a cyberattack.

Spy Agency Goes to Market

CIA focuses on AI, Quantum Computing; creates CIA Labs to earn from innovations following the NASA model

The US space agency has always been doing it: spinning off space technologies for everyday use. Even when the pandemic struck, it rose to the occasion and patented an improvement to an oxygen helmet used by astronauts in aid of COVID-19 patients. NASA’s Johnson Space Center in Houston, home of the Human Health and Performance Center, and the Technology Transfer Office combed through more than 2,000 technologies and software programs created over the last decade, looking for anything that might be useful in confronting the health crisis at hand.

The centre submitted a portfolio of 34 open source technologies to the United Nations. It is also helping a handful of groups update and manufacture a simple, human-powered ventilator originally designed for the space program. From memory foam to infant nutritional formulas, NASA has always come up with incredible innovations and eclectic products. Now CIA, the US spy agency, is also coming forward to allow its scientists to patent their innovations.

The CIA has always been researching, developing and realising cutting-edge technology. And now it wants to lead in fields like artificial intelligence and biotechnology. But, like many private corporations, it faces difficulties in recruiting the right kind of talent – all the more so because the spy agency cannot match the mouth-watering salaries and brand attraction of Silicon Valley companies. To overcome this hurdle, CIA is allowing its officers to make money from the innovations that come from within the agency.

The agency’s solutions arm, CIA Labs, is set to recruit and retain technical talent by offering incentives to those who work there. Under the new initiative, CIA officers will be able for the first time to publicly file patents on the intellectual property they work on – and collect a portion of the profits. The agency gets to keep the rest of the amount. CIA is hoping that in this way its research and development could end up paying for itself. CIA Labs is looking at areas including artificial intelligence, data analytics, biotechnology, advanced materials, and high-performance quantum computing.

According to MIT Technology Review, it’s not the first time the agency has worked to commercialize technology it helped develop. The agency already sponsors its own venture capital firm, In-Q-Tel, which has backed companies including Keyhole – the core technology that now drives Google Earth. In 2009, GainSpan Corp., a provider of low-power Wi-Fi semiconductor solutions, announced a strategic investment and technology development agreement with In-Q-Tel. GainSpan’s GS1010 chip was a highly integrated ultra-low-power Wi-Fi system-on-chip (SOC) that contained an 802.11 radio, media access controller (MAC), baseband processor, on-chip flash memory and SRAM, and an applications processor, all within a single package.

CIA also works closely with other arms of government, like the Intelligence Advanced Research Projects Activity, to do basic and expensive research where the private sector and academia often don’t or can’t deliver the goods. What CIA Labs aims to do differently is focus inward to attract – and then retain – more scientists and engineers and become a research partner to academia and industry.

Officers who develop new technologies at CIA Labs will be allowed to patent, license, and profit from their work, making 15% of the total income from the new invention with a cap of US$150,000 per year. That could double most agency salaries and make the work more competitive with Silicon Valley.

The agency is being overwhelmed by the amount of data it collects. Military and intelligence agencies around the world deal with a multitude of sensors like, for instance, the kind of tech found on drones. The CIA’s own sensors suck up incalculable mountains of data per second. Officers badly want to develop massive computational power within a relatively small, low-power sensor so the sorting can be done readily on the device itself, instead of being sent back to a central system.

Questions will always remain around how efforts to develop new technology should proceed in the new set-up, especially at an agency that has long been a fundamental but clandestine instrument of American power. Some inventions have been uncontroversial – during the Cold War, for instance, the agency helped develop lithium-ion batteries, an innovative power source now widely used by the public. More recently, however, during the war on terrorism, the agency poured resources into advancing nascent drone technology that has made tech-enabled covert assassination a weapon of choice for every American president since 9/11, despite ongoing controversy over its potential illegality.

However, the way things are progressing, the day is not far when you might be able to purchase the latest “Bond Gadgets” off the shelf at your neighbourhood departmental store!

Into the Next Zone

AI industry gradually shifting from data-centric to knowledge-centric models to meet future requirements

As the world becomes more reliant every passing day on self-learning algorithms and data-based models of logic and machine reasoning, Artificial Intelligence (AI) has become a household term. The tech industry too is riding high on automation and data-driven AI models. But some experts have already begun to doubt whether data has had its day, and whether it is now time for AI technology to go beyond mere ‘data’ and look into the much wider domain of ‘knowledge’. What significance does this new school of thought hold for the future of data science? Let’s take a quick tour of the basic concepts to understand.

It has been over 50 years since AI burst onto the technology arena. Back then, possibilities seemed endless, and the enthusiasm percolated to all walks of society – with much interest in robots and automation all round. This craze was ably fuelled by sci-fi writers and filmmakers. Sci-fi writing witnessed a boom between the 1950s and the 1990s, riding high on the wings of two technological wonders of the day – robotics and space missions. Hollywood took the cue and churned out one blockbuster after another on similar themes.

However, AI still remained at the theory level except for very niche purposes. It was only after computing technology began to advance at breakneck speed in the 1990s, and data-crunching became unbelievably easy, that AI models could be put to actual use. Then came the Information and Communication Technology (ICT) boom, which made possible the dissemination, collection and collation of data in real time. Ever since, data-based algorithms have been taking over nearly every repetitive task that businesses require. And now that idea has extended to the Internet of Things (IoT), where every device can be interconnected and stay in sync in real time, bridging distances in terms of both space and time.

However, with the generation, collection and analysis of data forming the backbone of all business algorithms, the question of data ownership arises. Companies build their decision models on proprietary data. With time and ever-burgeoning competition, however, proprietary data will not remain as unique a business asset, and AI strategies based on such data will lose their edge. According to experts, this is where AI-based businesses would need to shift their focus to remain sustainable – from data-based AI strategies to knowledge-based AI strategies.

It has been the current trend among businesses, especially start-ups, to place data acquisition at the heart of their business strategy. The data sets they gather – and their long-term strategy for acquiring additional proprietary data – feed the AI-based tools and models they use. This approach has been the backbone of commercially developed AI models. Fundamentally, sophisticated AI models voraciously feed on such big data to analyse and derive knowledge and insights, and they need a critical mass of big data for machine training and the optimisation of algorithms. Companies such as Google and Netflix have developed and curated massive and authoritative data sets over a long period of time. However, as public data becomes abundantly available, it will no longer be possible for any single player to hold on to such data as proprietary. With more players developing capability and collaborative data-sharing gaining acceptance, experts feel proprietary data will run out of steam within the next ten years. But AI-based models would still need input to run on – and this new input would be ‘knowledge’-based.

What, then, is this knowledge? As has been famously said, currently “we are drowning in information but starved for knowledge”. Instead of piecemeal raw data, new decision-making models would derive their inputs from more meaningfully processed material – customized to the needs of the company that uses it. This will lead to the development of new and innovative frameworks and business models. The new approach will require collaboration between diverse stakeholders to bring together data, information, AI models, storage, and computing power – and the resultant output would be the creation of ‘knowledge’ – the new proprietary asset.

A good example is the Israeli Innovation Authority, which launched a pilot program for knowledge-based cooperation between hospitals and technology start-ups in 2019. It facilitated the exchange of raw healthcare data among hospitals, and between hospitals and start-ups, to facilitate the generation of new knowledge based on those inputs. Analysts predict that the industry will soon be witnessing changes aimed at this transition. More organisations would be laying the foundations for a knowledge-centric era, in which “asking the right questions, looking for the most relevant predictions and designing the most disruptive AI-based applications” would be the game changers.

Doing Business in a New World

How organisations will transform under the impact of the COVID-19 pandemic

Let’s face it, nothing is going to be the same again once the global pandemic recedes. By now, everybody knows that. There has been a lot of talk about the “new normal” that is going to take over the ways we work, socialize and live. Behavioural changes will definitely happen, no doubt about that. But apart from changes in individual routines, a far greater change is looming on the horizon. Organisations are going to change the way they conduct and operate their business and manage their people. So while much discussion is going on about how employees will adapt in the days ahead, let us shift our focus to the possible changes that are going to alter organisations. Change they must, because what worked in the past won’t necessarily work now, or in a post-pandemic future.

Logistics

The physical aspects of conducting business are certainly going to be under the scanner. With people working from everywhere except the office, the whole concept of setting up a physical shop is now redundant. In one sweep, organizations now have thousands of end-points from which to conduct operations – and those are our homes. While this will definitely allow the freeing up of real-estate holdings, another change it will force is doing away with the idea of a centralised headquarters. Even if employees do attend office in future, they might not be required to travel all the way to ‘that one big building located at the central business district’ of a major city. Instead, companies might retain just one skeletal headquarters for administrative purposes and distribute the work centres across the country. An employee would then simply log in from the nearest office branch – much like the way we use bank ATMs.

This would impact the HR approach too. Talent can then be tapped from anywhere, without worrying about the feasibility of relocation. The company gains from the availability of a resource pool with no boundaries and lower hiring costs, while the worker does not have to leave home and family to land a plum job.

Tech Infrastructure

It is obvious that, with everything happening remotely, digital transformation becomes a necessity and no longer an option. In the new normal, all operations, delivery and customer engagement will happen over digital platforms; hence, organisations will either be digital or cease to exist. It has already been demonstrated that the IT sector – which has had a robust remote-working infrastructure in place for many years – coped best in the current WFH situation and is even in growth mode.

As a ripple effect, investments in retraining all workers would go up initially – because now everyone must learn how to use the host of new online collaborative tools and remote performance software. As with organisations, the individuals who don’t or can’t upskill will not survive.

We will also be seeing a greater use of automation and robotics for routine processes – both to eliminate human dependency and in a bid to make tasks contactless. As a result, artificial intelligence and AR-VR technologies will be much in demand. IoT is already driving towards unification of gadgets and technology, which will gain a major impetus now. And data-based decision models will be the rage – because faced with an uncertain future, organisations would prefer to predict future disruptions with greater accuracy.

Cybersecurity and Privacy

With all business transactions going online, cybersecurity is bound to be a major concern. It always was, but previously companies kept their own terminals and servers secured within multi-layered capsules of digital firewalls. Now, each home connection is a terminal node for the organisation – and that poses a real challenge. The onus is now on the firm to secure company data on the one hand and, on the other, protect workers operating over diverse individual data connections from cyberattacks.

Going digital also requires employees to share greater volumes of personal data over the web. Cybersecurity will therefore become mission-critical to protect this wealth of data and ensure that WFH employees are safeguarded from cyber threats. Employees will expect greater protection and want to know how companies are using the data they generate. Some technologies used to enhance productivity or to enable remote working may be perceived as lacking in privacy standards, and these will come under increased scrutiny from authorities. As a result, countries with relaxed or non-formalised data-security legislation will be shunned by investors across the world.

The Economy and the Inequality

Even if we leave aside the pandemic, the world is generally becoming too unstable for long-term planning. Several issues – political turbulence, social unrest and protests, the rise of hardline factions, environmental concerns and other similar macro-issues – are beginning to weigh on the world economy. Faced with a “synchronized slowdown” of the economy, businesses suddenly find it hard to take risks to break new ground or to expand. And COVID-19 was the last straw, leading to total disruption. The situation is not going to improve soon, uncertainties are only set to increase, and this will take a toll on every industry.

More immediate and helpless is the situation of the population stranded on the wrong side of the great digital divide. While both education and business turn web-based, the economically disadvantaged sections of society will be left behind – as they lack both the devices and the connectivity. It is a problem without any immediate solution. Moreover, the chasm of inequality keeps widening as frontline blue-collar workers either lose their jobs due to lockdowns or risk their lives in the face of the pandemic. And this is the group that is low-income and without financial back-up. In contrast, economically more stable and higher-paid white-collar employees get to go on earning the same from the safety of their homes. No one knows how best to address this disparity, but fears are that it will leave lasting strains on the social fabric.

Managing in a New World

So where do leaders stand in this transformation? With remote working becoming the norm, managers and leaders are increasingly abandoning the role of scrutinizers and becoming mentors and guides. In truth, that was how it should have been all along, but a greater part of top management took micro-managing very seriously. Now, however, a new kind of leadership is emerging which defines objectives, sets milestones and lets teams go on autopilot. Employees will still miss the social connections they could have made at the office, but that is something which will have to wait for quite some time now.

The Digital Transformation of Biology

Computer-Aided Biology is the way forward as collaboration between biology and data science is set to increase

Given the drastic changes that AI and automation have brought to almost every aspect of our existence, it should come as no surprise that the way we look at (and study) life has been completely transformed as well. The thinking behind Computer-Aided Design (CAD) has slowly given rise to the field of Computer-Aided Biology (CAB), notably in synthetic biology, and the ensuing comparisons between the two are almost completely justified. Computer-Aided Biology is essentially an emerging ecosystem of tools that augment human capabilities in biological thought and research, almost completely redefining the way that biology is thought of and taught.

Arresting the slide

The challenges faced by the biopharmaceutical research industry were global and, until a few years ago, unarrested. Pharmaceutical R&D productivity had witnessed several decades of decline, with some research even suggesting that by 2020 the industry-wide internal rate of return on new R&D would hit 0% – making any new research unsustainable.

In order to arrest the slide in productivity, the bio-research industry is exploring new methods and technologies to help accelerate its R&D. Recent innovations in automation have facilitated a new era of open-source science, driving down sequencing costs and pushing processes like screening towards much higher data output, thereby pushing biological experimentation directly into the realm of big data. We have now reached a point where the inherent complexity of biology is finally beginning to be codified in the form of large datasets from increasingly optimised experimentation.

However, it is still worth noting that the worlds of synthetic biology and engineering haven’t quite merged into a sustained positive-feedback loop. Biology is a subject with a high degree of multivariate complexity, and one can argue that the integration of technology and biology wasn’t quite hitting “the foundational level of expanding and enhancing experiments to enable effective data integration and iterative design”.

Until fairly recently, most of the automation technology in use – such as liquid-handling robots or electronic ‘lab notebook’ technologies – was designed mostly for single-factor experiments. Programming such robots was a task that required several weeks of dedicated work, and readjusting the combination of factors for a different experiment was also highly time-consuming. The advent of an integrated CAB ecosystem is set to change this drastically.

The Digital and the Physical

Although biological and medical research over the past few years has seen rapid advances, the way research is conducted has remained static. Newer methods have simply fit into existing outlines or promoted reductionist ways of working in certain aspects. Computer-Aided Biology, however, is set to change this landscape by integrating its digital and physical domains and striving towards reimagining research methods at the source.

The Digital domain is primarily an AI-powered environment for designing and simulating modelled biological systems. Its primary functions involve the collation, connection, structuring and analysis of data from wet-lab experiments (analogous to CAD, CAE and PLM). The Physical domain, on the other hand, is centred around automation, facilitating the seamless translation of digitally simulated designs into the ‘real’ world through protocol design, logistics simulation, and execution.

Integrating these two aspects will allow the creation of tagged, connected and structured datasets – ideal for most advanced machine learning algorithms today. It will allow for experimentation at much higher levels of complexity, using digital tools to rigorously explore the dynamism of biological spaces, yielding a higher volume of data that can be analysed to produce fresh insights as well as new lines of research.
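As a rough sketch of what such tagged, connected and structured records might look like (the field names here are hypothetical, not drawn from any specific CAB platform):

```python
# Hypothetical sketch of a "tagged, connected and structured" experimental
# record linking the digital design to its physical execution and readout.
from dataclasses import dataclass, asdict

@dataclass
class ExperimentRecord:
    design_id: str        # link back to the digital design/simulation
    protocol: str         # physical-domain protocol that was executed
    conditions: dict      # tagged experimental factors (temperature, reagents, ...)
    measurement: float    # wet-lab readout

records = [
    ExperimentRecord("design-001", "plate-assay-v2", {"temp_C": 30}, 0.42),
    ExperimentRecord("design-001", "plate-assay-v2", {"temp_C": 37}, 0.61),
]

# Structured records like these can be handed straight to an ML pipeline.
print([asdict(r) for r in records])
```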

Stress on the ‘why’, not the ‘how’

The primary reason why CAB is set to be ground-breaking is that it finally replaces the ‘brute force’ approach to drug discovery with a much more nuanced and holistic means of carrying out research. Of course, it goes without saying that a radical transformation of this magnitude will not occur overnight and will have its fair share of challenges.

The adoption of CAB and the new technologies associated with it will lead to a skills shortage among workers in the days to come. Skills such as proficiency in biological research methods will be replaced with new skills in coding and data science. This may also lead to a new tiering of skilled workers within any R&D team: (1) the ‘creative thinkers’, or proven scientific experts driving research and drug discovery, (2) the ‘technicians’, responsible for supporting the swathes of new automation equipment and AI technologies, and (3) the ‘data scientists’, who will bridge the gap between the two. Effectively, there will be a shift in mentality among research scientists – with a renewed focus on the ‘why’ instead of the ‘how’.

The development of an aggregate integrated system is a time-intensive process which, albeit gradually, will deliver returns over much longer periods. For it to function smoothly, however, there needs to be smoother internal transmission of design files, analytical data, environment data and other information – which will require higher degrees of collaboration among individuals who must now speak two languages: biology and data science.

Breaking Down Barriers to Growth

Israeli start-up DeepCube is efficiently tapping the full potential of Deep Learning through software

Are AI-based models really sustainable in terms of cost? Estimates from recent research show the following: (1) the University of Washington’s Grover fake news detection model cost US$25,000 to train in about two weeks; (2) OpenAI reportedly racked up a whopping US$12 million to train its GPT-3 language model; (3) Google spent an estimated US$6,912 training BERT, a bi-directional transformer model that redefined the state of the art for 11 natural language processing tasks.

No doubt it is exhilarating to see AI researchers pushing the performance of cutting-edge models to new heights. But the costs of such processes are rising at a dizzying rate – with models in many cases even approaching the limits of available computational capability. This is a cause for concern indeed. Perhaps certain auxiliary software-based solutions can provide an answer.

The end of the last AI winter

Historically, the hype around AI has almost always travelled cyclically, in booms and busts. The highs of the 1970s were followed by a period of prolonged stagnation in research, leading to marked pessimism in the media and, thereby, a considerable reduction of funding throughout the 1980s. This was again followed by a meteoric rise through the dot-com bubble and a subsequent fall when the bubble burst in the early 2000s.

As of 2020, although the hype around artificial intelligence and its future prospects can be thought to be peaking, several research outcomes (such as very recent research from MIT) reveal that its capabilities are set to be constricted in the near future, constrained by the size and speed of algorithms and the need for costly hardware.

It goes without saying that the deep learning models of today have set new benchmarks for computer performance across a wide plethora of tasks. Their prodigious appetite for computing power, however, imposes a limit in itself – how far can deep learning improve in its current form without being constrained by computational limits? While it is true that an explosion of computing power over the last two decades has almost certainly ended the cyclical occurrence of AI winters, research shows that the growth trajectory will soon be arrested – especially in an era where improvements in hardware performance are slowing as well.

The likeliest impact of these computational limits will be to either (1) force deep learning algorithms into less computationally intensive methods of improvement, or (2) push machine learning towards techniques that are more efficient than deep learning.

Enter the DeepCube

Israel-based start-up DeepCube is set to change the computational landscape by building a first-of-its-kind “software-based inference accelerator”, which it claims drastically improves deep learning performance on existing hardware. This will increase efficiency in the deployment of deep learning-based models on intelligent edge devices – in fact, DeepCube claims to be the ‘only’ efficient technology on that front. The software is designed to run on any type of hardware, including GPUs, processors and AI accelerators – claiming a 10x improvement in speed along with substantial memory reduction.

DeepCube works by producing considerably more lightweight models, irrespective of whether the model is a classical machine learning algorithm, a convolutional neural network (CNN) or a recurrent neural network (RNN). This is achieved through proprietary automated techniques designed by DeepCube that are highly optimised for “running sparse deep learning models for inference”. As a result, there is “dramatic speedup and memory reduction on any existing hardware.”
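DeepCube’s techniques are proprietary, but the general principle behind sparse inference can be illustrated with a generic pruning sketch – once most weights are zeroed out, only the surviving entries need to be stored and multiplied (the sizes and the 90% pruning ratio below are arbitrary, chosen for illustration):

```python
# Generic illustration (not DeepCube's proprietary method) of why sparse
# models shrink memory use: after magnitude pruning, only the non-zero
# weights need to be stored and multiplied at inference time.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
dense_weights = rng.normal(size=(1000, 1000))

# Prune: zero out the 90% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(dense_weights), 0.9)
pruned = np.where(np.abs(dense_weights) >= threshold, dense_weights, 0.0)

sparse_weights = sparse.csr_matrix(pruned)   # store only the surviving ~10%
x = rng.normal(size=1000)

print("dense bytes: ", dense_weights.nbytes)
print("sparse bytes:", sparse_weights.data.nbytes
      + sparse_weights.indices.nbytes + sparse_weights.indptr.nbytes)
print("same result? ", np.allclose(pruned @ x, sparse_weights @ x))
```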

Usability: Data centres, Semiconductors and the Edge

Most global data centres training large deep learning models require large amounts of memory and dedicated hardware (e.g. CPUs, GPUs, Edge chips, etc.). Consequently, most deep learning deployments have been limited to the Cloud. Even so, the attached costs and computational requirements are massive. To achieve decentralised processing away from the Cloud while maintaining efficiency, DeepCube provides a solution that allows for efficient deep learning deployment on Edge devices.

With most data centres also opting to replace CPUs with GPUs for processing, CPU providers can stay competitive by simply offering a software update on current hardware, with resultant performance directly comparable to that of GPUs – at a fraction of the original runtime.

Leading consultancy McKinsey & Company sums up the situation succinctly in a recent report:

“The AI and (deep learning) revolution gives the semiconductor industry the greatest opportunity to generate the value that it has had in decades. Hardware can be the differentiator that determines whether leading-edge applications reach the market and grab attention. As AI advances, hardware requirements will shift for computing, memory, storage, and networking—and that will translate into different demand patterns. The best semiconductor companies will understand these trends and pursue innovations that help take AI hardware to a new level.”

With applications of DeepCube covering the entire AI deployment market, including sectors like healthcare, retail, finance and government among others, the future of software-based accelerators looks rather bright in breaking down the barriers to growth in the deep learning industry. DeepCube is a big step in the right direction.