Trends in Robotics
Fashion is a form of self-expression. Whether it’s Facebook CEO Mark Zuckerberg showing up in a hoodie for a meeting to signal he doesn’t need to dress up, someone wearing a bright red blazer to stand out from a sea of grey jackets at a job interview, or a teenager picking out a trendy outfit to wear to school to boost their popularity, what someone wears says a lot about who they are and how they want to be seen.
But how do clothing retail companies predict what trends will be popular ahead of the start of the season?
It turns out many of today’s leading fast-fashion companies, like H&M and Zara, are turning to artificial intelligence to help them predict tomorrow’s trends and stay ahead of the curve.
If you want the look for less, you want fast fashion. Examples of fast fashion include H&M, Zara, Old Navy, Urban Outfitters, Topshop, Mango, and Uniqlo. Fast fashion is, generally speaking, the trendy clothing and accessories that you can purchase inexpensively from malls across the country.
“Inexpensive” is, of course, a relative term. Many fast fashion stores offer bargain-basement prices for younger shoppers chasing a look that won’t outlast the season, alongside higher-priced pieces for shoppers who want outfits that are more professional or made from better materials yet still affordable to the middle class.
The word “fast” comes into play because these of-the-moment looks are often ripped from the catwalks and from celebrity culture then found on Main Street faster than you can say “basic.” This means their production cycle is compressed as much as possible so brands can capitalize on trends before their customers declare them passé.
How can a retailer predict a trend if the very nature of a trend is that it’s a short-lived fad? In the past, fast fashion brands have looked at what high fashion designers are doing during Fashion Week and emulated their looks. As well, they’ve looked to see what celebrities are wearing and even what streetwear looks like in cities like Paris and London.
Fashion labels also look to history. By and large, fashion is cyclical. What goes around, comes around. ’90s kids could be found wearing ’60s and ’70s bellbottoms along with their baggy JNCOs, and now Millennials and Gen Z are rocking the slip dresses and combat boots of the ’90s. As such, the fashion industry looks to the 20-year rule for inspiration.
They also look to current events. The hemline index theory posits that when the economy is booming, women’s skirts get shorter. Meanwhile, the lipstick effect suggests that when the economy slumps, women look to little luxuries like a new lipstick to brighten up their styles.
With the rise of social media platforms like Instagram, influencers are posting their OOTD — outfit of the day. On the one hand, it allows the fashion industry to quickly capture trends through SEO and algorithms. On the other hand, it means trends may have shorter cycles, as everyone is looking for the next big thing instead of repeating looks.
Today, though, the fast fashion industry has artificial intelligence at its disposal.
When a machine seems to have the uncanny ability to make decisions, that’s essentially artificial intelligence at work.
At its broadest definition, artificial intelligence (AI) uses computer science to simulate human, or natural, intelligence in machines. Algorithms enable machines to problem solve. Not just that, the algorithms allow the machines to learn.
Along with these abilities, AI can perform automated planning and scheduling.
Artificial intelligence is transforming supply chain management.
There are a number of roles that AI can play in the supply chain. Chatbots could handle purchase requests, smart warehouses could manage inventory, and autonomous vehicles could help with shipping.
In 2019, a McKinsey survey found that most companies that use AI increased their revenue: “In supply-chain management, respondents often cite sales and demand forecasting and spend analytics as use cases that generate revenue.”
This demand forecasting, or predictive analytics, is used throughout the fashion supply chain.
In the past, fast fashion had to gather insight from across the fashion world, taking into account the colors, patterns, materials, and cuts showcased on the runways in Paris, the streets of Berlin, the red carpet in Hollywood, and the dive bars of Brooklyn. It ran the risk of the Baader-Meinhof Phenomenon, in which a fashion buyer’s brain experiences selective attention and confirmation bias. Today, computers can gather and group information much more efficiently.
And more effectively: “In a recent Gallup study centered on predicting consumer demand, data was provided on NASDAQ, product and brand searches, underemployment, and standard-of-living indices. By combining these data sources, Gallup was able to create a predictive model that outperformed their client’s previous consumer demand model by more than 150%.”
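To make the idea concrete, here is a minimal sketch of how a market signal could be turned into a demand forecast with ordinary least squares. The "search index" feature and all the numbers are invented for illustration; Gallup's actual model combined many more data sources.

```python
# Hypothetical sketch: fitting a simple least-squares model that maps a
# market signal (e.g., normalized search volume for a product category)
# to units sold, then forecasting next period's demand. All data is
# made up for illustration.

def fit_ols(xs, ys):
    """Fit y = a + b*x by ordinary least squares."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

search_index = [0.2, 0.4, 0.5, 0.7, 0.9]   # past observations
units_sold   = [110, 150, 170, 210, 250]

a, b = fit_ols(search_index, units_sold)
forecast = a + b * 0.8   # predicted demand at next month's search index
print(round(forecast))   # → 230
```

Real consumer-demand models blend many such signals (market indices, employment data, standard-of-living indices) and use far richer estimators, but the principle — learn the mapping from observable signals to demand, then extrapolate — is the same.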
The Swedish fashion empire H&M employs more than 200 data scientists to predict and analyze trends. Its AI algorithms obtain fashion trend data by capturing information on search engines and blogs. This information informs everything from how much the company buys and when it buys to where merchandise should be placed in its stores.
Importantly, AI not only forecasts new trends the company’s buyers should be aware of but also informs them of whether they should restock currently popular merchandise. As Thomas has previously reported, H&M’s artificial intelligence therefore helps the company reduce waste and make more sustainable decisions.
Head of the H&M Club Samuel Holst said, “Knowing our customers — having this insight, knowing where, how and when they shop, knowing what they like — that is an important piece in how we will be able to predict trends.”
In 2019, then-CEO Karl-Johan Persson reported that H&M’s venture into artificial intelligence had already helped the company predict trends.
Like its competitor, the Spanish fashion outfitter Zara has turned to artificial intelligence to further its goals. The company uses it in a number of ways, including employing AI robots to fetch requested orders for customers using its Buy Online, Pick-Up in Store (BOPIS) or Click and Collect options.
Unlike H&M, which relies heavily on outsourcing to speed its production, Zara’s use of outsourcing is minimal. As Thomas has previously reported, one of the advantages of this tactic is that “Zara controls everything from design to display to shipping, allowing it to gather valuable data at every stage. This data can then be analyzed to identify inefficiencies, pinpoint areas of success, and create accurate forecasting.”
To further augment this, the company has also hired Tyco to install microchips into its clothing’s security tags so that they can identify where within the supply chain a particular style and size is located. This allows the company to have full visibility over the inventory it can sell, thereby informing its forecasting analytics.
Zara also has an initiative with Jetlore, a consumer behavior prediction platform that uses artificial intelligence. Founded by computer scientists from Stanford in 2011, the company was bought by PayPal in 2018. Crunchbase explains its use in fast fashion: “Jetlore’s AI-powered platform maps consumer behavior into structured predictive attributes, like size, color, fit, or style preferences, making it the only customer data platform for B2C businesses in the market. This structured data allows top tier retailers and large hospitality and media companies to optimize content and communication for the consumers, make better merchandising decisions, optimize search, and empower the next generation of customer service.”
[For more information, read Thomas’ “How the Zara Supply Chain Taps into Top Clothing, Retail Trends”]
The irony is that artificial intelligence is doing more than just predicting trends. It’s also influencing them.
AI tracks shoppers’ behavior patterns online so it can offer a personalized shopping experience. Machine learning gathers data on what a shopper likes and when and how often they make certain purchases. Its predictive technology then allows it to anticipate a shopper’s needs and desires so it can offer them similar products. It even learns when online shoppers are more likely to be open to trying out a new brand.
In this way, AI can be used to introduce shoppers to new brands and styles, push bulging inventory, and drive trends.
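As a toy illustration of that behavioral matching, the sketch below ranks catalog items by how strongly their attributes overlap with a shopper's purchase history. The item names, tags, and scoring rule are all invented; production recommender systems use far richer behavioral models.

```python
# Illustrative sketch (all names and tags invented): rank new items for
# a shopper by overlap with the attributes of their past purchases — a
# simplified stand-in for the personalization models described above.

from collections import Counter

def build_profile(purchases):
    """Aggregate attribute counts from a shopper's purchase history."""
    profile = Counter()
    for item in purchases:
        profile.update(item["tags"])
    return profile

def score(item, profile):
    """Score = how often the shopper has bought the item's tags before."""
    return sum(profile[t] for t in item["tags"])

history = [
    {"name": "slip dress",   "tags": {"90s", "dress", "minimal"}},
    {"name": "combat boots", "tags": {"90s", "boots"}},
]
catalog = [
    {"name": "mom jeans",        "tags": {"90s", "denim"}},
    {"name": "neon windbreaker", "tags": {"80s", "outerwear"}},
]

profile = build_profile(history)
ranked = sorted(catalog, key=lambda item: score(item, profile), reverse=True)
print(ranked[0]["name"])   # → mom jeans (shares the "90s" tag twice over)
```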
There’s a lot of efficiency and innovation at the root of automotive manufacturing, but one process that’s still in need of advancement is painting.
Part of that is due to the fact that paint chemicals can be damaging to both humans and the environment — but there’s also the problem of overspray. Spray paint has a way of getting in places where it doesn’t belong, resulting in wasted paint and rework.
Well, one of the world’s top luxury automakers has pioneered a new method it believes will revolutionize its vehicle painting process.
BMW recently initiated a pilot project that uses automation to apply paint in a way that’s “overspray-free.”
The new EcoPaintJet Pro application process, now in use at BMW’s Dingolfing plant in Germany, is said to be so precise that designs can be applied without stencils or masking the vehicle. BMW, which developed the process with automated-painting specialist Durr Group, says EcoPaintJet Pro "not only increases the degree of individualisation for customers, but also contributes to sustainability by reducing waste and energy consumption."
The pilot is tackling 19 BMW M4 Coupés with custom two-tone paintwork, as well as an "M4" logo emblazoned on the tailgate and bonnet. The automaker says the new process, which skips traditional electrostatic adherence in favor of jet application, will allow for "virtually limitless options for individualization," and that the crisp lines achieved through robotic precision will make masking a thing of the past.
Ford Motor Company has announced several electric vehicle investment projects so far this year, but the automaker’s latest is on a whole other level.
On Sept. 27, the company unveiled plans for an $11.4 billion project that is expected to create nearly 11,000 new jobs.
In what Ford says is “the largest ever U.S. investment in electric vehicles at one time by any automotive manufacturer,” the company plans to build two major manufacturing campuses at sites in Tennessee and Kentucky that will support demand for Ford’s F-150 Lightning truck, E-Transit, and Mustang Mach-E EVs, along with the batteries that will power them.
In Stanton, Tennessee, Ford will build a $5.6 billion production campus it has dubbed “Blue Oval City.” The 3,600-acre hub will serve as an assembly complex for the company’s electric F-Series vehicles. The campus, set to be carbon-neutral, is expected to house about 6,000 employees.
In central Kentucky, Ford and its South Korea-based EV battery partner, SK Innovation, will establish the 1,500-acre BlueOvalSK Battery Park, a $5.8 billion investment that will create twin battery factories in Glendale, south of Louisville. About 5,000 employees will staff those facilities, which are expected to open in 2025.
Overall, Ford said its share of the total investment would be approximately $7 billion, and that the project is part of the company’s plans for more than $30 billion of investment in EVs by 2025. This past summer, Ford said it forecasts that 40% to 50% of its worldwide vehicle sales will be from fully electric vehicles by 2030.
“This is our moment — our biggest investment ever — to help build a better future for America,” Ford President and CEO Jim Farley said in a news release. “We are moving now to deliver breakthrough electric vehicles for the many rather than the few. It’s about creating good jobs that support American families, an ultra-efficient, carbon-neutral manufacturing system, and a growing business that delivers value for communities, dealers, and shareholders.”
Ford’s announcement also said the company would invest $90 million in Texas to boost job training and career readiness for auto technicians. It’s part of a broader $525 million investment across the U.S. to empower a pipeline of EV technicians.
Cruise and Waymo earned autonomous vehicle permits to provide ride services to passengers in California. Reuters reports they are the first of the self-driving companies to receive such permits.
Cruise, a subsidiary of General Motors Co., will be able to offer autonomous rides in certain areas of San Francisco. Per the California Department of Motor Vehicles, Cruise’s rides can operate between the hours of 10 p.m. and 6 a.m. at speeds of no more than 30 miles per hour.
Waymo, owned by Alphabet Inc., will also deploy its autonomous vehicles, but with a safety driver in the car. The presence of the safety driver allows Waymo’s cars to operate on public roads in parts of San Francisco and San Mateo counties at speeds of up to 65 miles per hour.
The acquisition of the permit allows Waymo to expand from suburban areas in Arizona where it was already offering paid, driverless rides.
Cruise had been testing in San Francisco with General Motors’ Bolt EVs, while Waymo used all-electric Jaguar I-Pace SUVs.
To charge customers for self-driving rides in California, the companies will need another permit, according to the DMV; it would come from the California Public Utilities Commission.
Cameras are critical components of the drive to enable the autonomy and increase the safety of vehicles, drones, and robots. Near-infrared (NIR) cameras are rapidly enhancing machine-vision capabilities and are essential to a range of inspection applications. Israeli startup Unispectral says it has developed a solution that includes a miniature tunable NIR filter and image-processing software to turn any low-cost IR camera into a hyperspectral camera.
Promising startups pop up every day, and it is not always easy to spot them, especially when they emerge several thousand kilometers away. Alissa Fitzgerald, founder of microelectromechanical-system design and development house A.M. Fitzgerald and Associates, brought Unispectral (Tel Aviv) to EE Times Europe’s attention. The startup was noteworthy, Fitzgerald said, because it had “released a spectral camera suitable for the mass market, and the MEMS technology makes this product smartphone-sized instead of a tabletop instrument.” An interview with Ariel Raz, Unispectral co-founder and CEO, soon followed.
Admittedly, there are plenty of spectral cameras today, but they are large, complex, and expensive, suited to high-end lab equipment. Unispectral’s goal has been to make high-end spectral imaging accessible to the mass market.
Human color vision is trichromatic. Every color we see is the product of signals generated by just three types of photoreceptor cells in the retina. Our vision is thus organized into — and limited to — a three-dimensional color space.
Now imagine a device, such as a smartphone, that could extend human vision into a high-dimensional color space.
Think of all the hidden information that could surface and play a critical role in our daily lives. One way of doing that is hyperspectral imaging.
Unispectral says it has developed a new concept of a tunable Fabry–Pérot NIR filter. Its design mounts an array of vapor-coated mirrors on a MEMS assembly. With controlled changes of the voltage applied on the upper mirror holder, the optical cavity changes to allow only a desired IR wavelength to pass.
Peleg Levin, CTO of Unispectral, explained the concept at an IEEE MEMS Conference. “In our design, we have a movable mirror that has one set of electrodes and another set of electrodes that are exterior to the optical gap,” said Levin.
“When we apply the actuation voltage, the optical gap increases instead of decreasing, and since we can design this electrostatic gap to be much greater than the optical gap, we can allow a very large tunable range of the optical gap itself.”
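The physics behind that tunability can be sketched with the ideal Fabry–Pérot resonance condition, 2d = mλ at normal incidence for an air gap of width d and integer interference order m: the transmitted wavelength scales directly with the optical gap. The gap values below are illustrative only, not Unispectral's design figures.

```python
# Back-of-the-envelope sketch (not real design data): which NIR
# wavelengths an ideal air-gap Fabry–Pérot cavity passes, using the
# resonance condition lambda = 2*d / m at normal incidence.

def passbands_nm(gap_nm, lo=700, hi=950):
    """Wavelengths (nm) transmitted by an ideal cavity of the given gap
    that fall inside the NIR band of interest."""
    peaks = []
    m = 1
    while (lam := 2 * gap_nm / m) >= lo:   # orders beyond this are too blue
        if lam <= hi:
            peaks.append(round(lam, 1))
        m += 1
    return peaks

# Sweeping the electrostatic gap tunes the selected wavelength:
print(passbands_nm(400))   # 2*400/1 = 800 nm → [800.0]
print(passbands_nm(425))   # 2*425/1 = 850 nm → [850.0]
```

In a real device the mirror coatings add phase shifts and the finesse sets the passband width, but the sketch captures why enlarging the electrostatic gap relative to the optical gap yields a wide tunable range.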
The filter is manufactured using full wafer-level technology to provide a component ready for mounting and integrating with the camera assembly and device controllers, according to the company.
As the camera megapixel race was nearing its end, Tel Aviv University EE professor David Mendlovic and Raz, who was then his doctoral student, realized that the combination of spectral analysis and imaging would “create something very powerful,” Raz told EE Times Europe.
In 2016, the researchers patented an optical component based on existing MEMS technology and established Unispectral. Four years later, the company announced the availability of an evaluation kit for its tunable NIR filter, now named ColorIR, which turns low-cost IR cameras into 700- to 950-nm spectral cameras, according to the startup.
More recently, Unispectral introduced the Monarch spectral IR camera, which integrates its tunable Fabry–Pérot filter with a miniature IR camera module in a 60 × 40 × 14.5-mm, 30-gram camera. The unit connects via a USB cable to an Android smartphone, a PC, or the main processor of an OEM platform.
“Developers in many industries can integrate Monarch in their products because it is already a spectral camera with some on-board processing, easy connectivity, and an easy-to-use API,” said Raz. “It can fit many platforms without going through the very tedious development cycle of the camera.”
Raz said spectral imaging drives better AI. The optical MEMS component acts as a tunable filter, and the software — an image-fusion library — supports the component and extracts all the relevant information from the image. “By combining our optical MEMS component and algorithms, the camera makes the transition from seeing to sensing,” he said.
Customers are currently evaluating Monarch in different industries and geographic regions, said Raz, adding that Unispectral has manufacturing partners to “support scale.”
Unispectral claims its technology can be applied in any area of human activity, from agricultural inspection to industrial quality control, facial authentication, computer vision, vehicle safety, and health monitoring. With its NIR tunable filter and NIR spectral camera, Unispectral aims to lower the barrier to entry for camera-based applications and use cases.
“It can be integrated in any platform that needs to address the question, ‘What do I see?’ and act on the answer,” said Raz.
One initial target has been smartphone front-camera facial authentication. Because of the limited space on the surface of a smartphone, sensors cannot be indefinitely added. A spectral camera can perform eye recognition, eye scanning, and face mapping all in the same unit, according to Unispectral. The company says that NIR facial authentication guarantees a higher level of security and robustness than visible spectrum authentication and 3D solutions.
In the medical domain, the integration of Unispectral’s NIR filter can help monitor vital signs, perform vein detection, and support contactless examination and diagnosis. A diabetic patient with an open wound, for example, could send a photo of the wound and a current blood-oxygenation reading to a doctor for remote evaluation and advice on treatment.
Precision agriculture is another potential application. A raft of sensors are already being used to measure soil humidity/moisture levels and soil/air temperatures, but Unispectral says its Monarch NIR portable camera promises a simpler way to monitor plant health and nutrients, detect pests and pesticide residues, and diagnose plant diseases.
Monarch captures detailed frames in the 700- to 950-nm NIR spectral range and measures large samples to provide context.
Asked whether Unispectral’s camera could help farmers transition from curative to preventive agriculture, Raz said the company has recently completed a proof of concept for the early detection of contamination in plants before the damage is perceptible to the human eye. In a greenhouse, such early detection would allow the farmer to remove the affected plant and prevent contamination of the whole crop. Unispectral has also developed a proof of concept for the detection of insufficient fertilizer application, which would give farmers an early indication of a mechanical malfunction or other problem.
Farming is a 24/7 job, and it’s critical to make timely decisions. If farmers miss the perfect planting window in their geographic area, the result is a lower crop yield. When farmers must send samples to external labs and then wait days to get the results, as is generally the case today, precious time is lost. “By the time they get [the information back], there is nothing they can do; it’s too late,” said Raz. Having the lab “in the palm of your hand is a game changer,” letting farmers instantly access meaningful insights and make real-time decisions in the field.
At the time of its inception, Unispectral raised US$7.5 million in a round led by Jerusalem Venture Partners, Robert Bosch Venture Capital, Samsung Catalyst Fund, and the Tel Aviv University Technology Innovation Momentum Fund.
“We established the company, built a team, and it took us years to develop a new type of MEMS component,” said Raz.
“It’s not like we took it off the shelf and optimized it. We overcame a lot of technical challenges. It was a long and complicated journey.
“The fun part starts now,” he continued. “We have developed the technology in the lab and now [can] start seeing how it really improves day-to-day life and how customers can integrate it into their products and gain a competitive edge.”
Unispectral, which employed 15 people when this article was written, is expanding into international markets. In addition to its Tel Aviv headquarters, it has established a subsidiary in China. It also has sales representatives in Germany, South Korea, and Hong Kong, and discussions are under way with distributors in the U.S. and U.K.
The world’s largest retailer is partnering with an automotive giant and its autonomous technology affiliate to roll out a self-driving delivery service in three U.S. cities.
Ford and Walmart announced that the automaker’s test vehicles, equipped with Argo AI’s self-driving system, would deliver groceries and other popular Walmart products directly to the homes of online shoppers in the Miami, Austin, Texas, and Washington, D.C., metro areas.
Argo and Walmart will integrate their software platforms to route packages and schedule deliveries. Initial integration testing is scheduled to begin later this year.
The companies will initially restrict the service to defined areas in those three markets — where Ford and Argo are already testing self-driving technology — before expanding the service areas over time.
Company officials said the partnership shows the potential for autonomous delivery services at scale, particularly in high-demand urban corridors and as customers increasingly acclimate to same-day or next-day delivery.
The announcement marks Walmart’s first multi-city autonomous delivery collaboration in the U.S., and follows a test with Ford in Miami three years ago. The retailer also previously partnered with self-driving startups Gatik and Nuro, as well as General Motors autonomous subsidiary Cruise.
Ford, which owns a stake in Argo AI along with Volkswagen, is also testing self-driving vehicles in Detroit, Pittsburgh, and Silicon Valley.
Power grids around the world are facing similar challenges. One of the biggest is the rise in renewable energy generation of all kinds; solar and wind energy are great for the planet, but they are as unpredictable as the weather.
Schemes designed to encourage consumers to put solar panels on roofs and use electric vehicles to store energy mean the grid is morphing from unidirectional to bidirectional. And instead of demand prediction, utilities now need to predict both supply and demand in real time, at very fine levels of granularity.
“The ability to add AI into the mix and do real-time analytics at the edge is going to be critical for increasing the amount of distributed energy resources that can come online,” Marc Spieler, global business development and technology executive for the energy sector at Nvidia, told EE Times.
Spieler pointed out that great work is being done in wind, solar, and EVs, but if the grid doesn’t have the ability to support those applications, the effort is wasted.
Demand prediction draws on many complicated factors. Aside from the weather, real-time prediction might include complex tasks such as anticipating how many electric trucks will arrive at which filling station and require battery charging at what exact moment.
“It’s going to come down to hour-by-hour–, minute-by-minute–type decisions,” he said. “And AI is the only thing that’s going to allow that to become efficient.”
Large-scale prediction
Utilities typically subscribe to detailed weather-prediction services today, feeding this data into complex models to try to predict energy demand.
“The people doing this best are probably the financial services companies, the hedge funds that are buying and selling power,” Spieler said. “Those guys are making huge investments in AI, and they’re capitalizing on the profit.”
However, Spieler said, utilities are upping their game.
“We are seeing a ramp-up of data science in the utility,” he said. “Some of the [utilities] we’re working with are ramping up their data science communities. We’re starting to sell hardware DGX systems [Nvidia data-center–class AI accelerators] into utilities for the first time.”
Among the techniques available to utilities for AI at scale is federated learning — a technique in which a central model may be trained using data from multiple sources, without the data having to be centralized or shared. This is often used in the health-care sector for medical AI models, as having access to more training data can make the models more accurate, but the data cannot leave hospital premises. In essence, local versions of the model are retrained at the edge, and then updates to the model parameters are centralized to make the overall model better.
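A minimal sketch of the federated idea, with invented data: each party fits a model locally and shares only the learned parameter, which a server averages into a global model. No raw measurements ever leave a site.

```python
# Toy federated-averaging sketch (all data invented): two utilities each
# fit a local demand model on private data and share only the trained
# weight, which the server averages into a global model.

def local_fit(xs, ys):
    """Local 'training': slope of y = w*x by least squares through the origin."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Two utilities' private (temperature, demand) series; neither is shared.
utility_a = ([1, 2, 3], [2, 4, 6])     # local slope w = 2.0
utility_b = ([1, 2, 4], [4, 8, 16])    # local slope w = 4.0

# Each site sends only its trained weight; the server averages them.
local_weights = [local_fit(*utility_a), local_fit(*utility_b)]
global_w = sum(local_weights) / len(local_weights)
print(global_w)   # → 3.0
```

Real federated averaging (FedAvg) weights each client's update by its sample count and iterates over many rounds of retraining and re-aggregation; this single-round, equal-weight version only shows the key property that raw data never crosses site boundaries.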
Nvidia has a platform for federated learning called Clara. Large-scale demand and supply prediction models for the electricity grid would be an interesting use case for Clara, said Spieler.
“[Utilities] can’t share their data, but they aren’t exactly competitors either, as there’s only one set of power lines going to your house,” he said. “We believe we can use federated learning to get the entire industry working together by training their models and sharing the model weights with larger organizations that can consolidate those.”
This could enable more accurate models to predict the grid’s response to unusual weather conditions — for example, a model deployed in a desert state could be trained partly with data from farther north that would include more instances of those particular conditions.
The grid of the future will also make use of AI at the edge. The “smart meter” of 10 years ago will get smarter as the use case shifts further from replacing human meter readers to taking more of a role in predicting consumer demand and supply from solar panels and EVs using AI.
According to Spieler, today’s smart meters use very little of the data to which they have access. A typical meter might have eight channels of data available, while downstream devices such as smart thermostats might be collecting as many as 20 or 30 channels of data.
“Every smart meter today has a chip in it,” he said. “The question is, will it be powerful enough to process the amount of data? We envision the smart meter could become like an iPhone: It captures a ton of data, and then utilities, consumers, and others can apply applications on top of that to optimize energy efficiency.”
In one scenario, if a substation went down, smart meters could provide the necessary data to create a neighborhood microgrid, which could share energy from solar or EV batteries among neighbors. In the event of extreme weather, AI-enabled smart meters could also potentially be used to switch off power to non-essential loads as a kind of smart load-shedding scheme. During last winter’s power grid crisis in Texas, for example, power to Houston’s pool pumps could have been shut off to preserve supply to homes that were running life-preserving medical equipment, Spieler said.
“[We could] take energy consumption off the grid in a surgical way,” he said. “Today, it’s done purely by turning [whole cities] on and off, but in the future, you could make a decision based on, say, the temperature of somebody’s home.”
During a cold snap, smart meters could reveal which homes were approaching 0°C, and then the grid could prioritize their power supply so the pipes wouldn’t freeze.
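The "surgical" load-shedding described above can be sketched as a simple priority rule: drop the lowest-priority loads until demand fits the available supply. The loads, ratings, and priorities below are hypothetical.

```python
# Toy priority-based load shedding (hypothetical loads): shed the
# lowest-priority loads first until demand fits the available supply,
# instead of blacking out whole neighborhoods.

def shed(loads, supply_kw):
    """loads: list of (name, kw, priority); higher priority = keep longer.
    Returns the names of loads switched off."""
    demand = sum(kw for _, kw, _ in loads)
    shed_names = []
    for name, kw, _ in sorted(loads, key=lambda l: l[2]):
        if demand <= supply_kw:
            break
        shed_names.append(name)
        demand -= kw
    return shed_names

loads = [
    ("pool pump",       2.0, 1),    # non-essential: shed first
    ("EV charger",      7.0, 2),
    ("heating",         3.0, 8),
    ("medical devices", 0.5, 10),   # essential: shed last
]
print(shed(loads, supply_kw=5.0))   # → ['pool pump', 'EV charger']
```

A real scheme would also weigh dynamic signals — indoor temperature, medical-equipment status, battery state of charge — exactly the kind of behind-the-meter data Spieler says AI-enabled meters could surface.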
“AI is going to provide this level of visibility,” he said. “The data exists — we’re seeing that with Nest thermostats and other things behind the meter — but that data doesn’t get back to the utilities to make decisions in a better way.”
Veritone is one of several companies developing AI solutions for grid management. Veritone’s Cooperative Distributed Inferencing (CDI) technology is designed to ensure predictable energy distribution and resilience across the grid. The system uses forecast data and rules to build and continually update device state models, which are then used to intelligently control edge devices.
“No human could properly control the grid,” Sean McEvoy, senior vice president for energy at Veritone, told EE Times. Analysis of massive volumes of data is required to continuously monitor the state of every energy-generating and -storing device on the grid, as well as energy demand, weather patterns, transmission flow, and energy prices, he said.
AI, he argued, is the only technology capable of doing all of this.
“Continuous real-time modeling of this tsunami of data delivers the intelligence to constantly know how much energy every grid participant needs, and how much energy can be delivered, at any given moment or in the near future,” McEvoy said. “Not only can no human do this, even massive computer power alone is not enough. It demands edge compute power combined with intelligent reinforcement and adaptation learning.”
Reinforcement learning is a technique in which an AI agent (an AI algorithm that takes some kind of action) is trained to maximize some notion of reward. This enables the agent to effectively learn from the consequences of its actions rather than specifically being taught. And the adaptive nature of Veritone’s algorithm means it is constantly updating its model of the system as it evolves with changing conditions in real time.
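Here is a bare-bones sketch of that reinforcement-learning idea on a toy battery-dispatch problem (in no way Veritone's actual algorithm): the agent is never told the right action, only rewarded for profitable ones, and its value estimates converge toward the good policy.

```python
# Minimal value-learning sketch (toy problem, invented rewards): an
# agent deciding each hour whether to charge or discharge a battery
# learns from reward signals rather than explicit instruction.

import random

random.seed(0)
STATES = ["cheap", "expensive"]     # electricity price regime
ACTIONS = ["charge", "discharge"]

def reward(state, action):
    """+1 for the profitable move (buy low / sell high), -1 otherwise."""
    good = (state == "cheap") == (action == "charge")
    return 1.0 if good else -1.0

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha = 0.5   # learning rate

for _ in range(200):
    state = random.choice(STATES)
    action = random.choice(ACTIONS)          # explore at random
    # One-step value update (no future term in this stateless toy):
    Q[(state, action)] += alpha * (reward(state, action) - Q[(state, action)])

best = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(best)   # learned policy: charge when cheap, discharge when expensive
```

Full reinforcement learning adds a discounted future-value term so that actions are valued by long-run consequences, and adaptive variants keep updating the model as the system drifts, matching the continual retraining described above.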
McEvoy explained further that Veritone’s AI model creation for the grid uses a “distributed, constrained Hamiltonian” approach, which means that inference is done at the edge to improve latency. The models can be constrained by rules, such as device warranties, or rules handed down by the North American Electric Reliability Corporation (NERC) or the Federal Energy Regulatory Commission (FERC). Mean-field algorithms are used for model synchronization, and linear algebraic models are used for forecasting. Veritone’s simulator uses Monte Carlo techniques to model the probability of different outcomes.
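The Monte Carlo part can be illustrated in a few lines: sample many random scenarios and count how often the outcome of interest occurs. The generation model below (independent uniform wind and solar output) is deliberately crude and purely illustrative.

```python
# Illustrative Monte Carlo sketch (invented numbers): estimate the
# probability that combined wind + solar output falls short of demand —
# the kind of outcome-probability question a grid simulator answers.

import random

random.seed(42)

def shortfall_probability(trials=100_000, demand_mw=100.0):
    short = 0
    for _ in range(trials):
        wind = random.uniform(0, 80)    # MW, crude uniform model
        solar = random.uniform(0, 60)   # MW
        if wind + solar < demand_mw:
            short += 1
    return short / trials

p = shortfall_probability()
print(round(p, 2))   # ≈ 0.83 (matches the analytic value 5/6 for these uniforms)
```

A production simulator would replace the uniform draws with correlated weather, demand, and price models, but the mechanism — repeated random sampling to approximate an outcome distribution — is the same.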
An overview of Veritone’s AI solution for the electricity grid is shown in the bottom figure on the previous page. Information about domain rules is converted into parameters by the rule translator. The tomograph learns and updates a model of the device under control, the optimizer continuously creates a policy that satisfies operational and behavioral goals, and the mean field synchronizes CDI agents by projecting states of the entire network onto local agents. The edge controller controls the edge device, the blackboard provides a flow of information, and the forecaster uses past and current sensor data to forecast future energy factors, including demand.
Veritone targets utilities and independent power producers such as microgrid developers and operators. It also works with equipment providers to develop and deploy AI models and predictive controllers for resources such as solar inverters and battery systems.
“The software can be deployed centrally — at substations or data centers — or at the edge, to control edge devices and provide edge inferencing,” McEvoy said. “Real-time synchronization of multiple grid edge devices provides a holistic model view of the plant state and capacity.”
On what scale will these AI technologies be rolled out, and does it make sense to start relatively small, perhaps with microgrids? Or are there downsides to starting out piecemeal?
“Generally, Veritone recommends starting piecemeal by controlling a single site’s energy resources,” McEvoy said.
Rolling out AI technology to smaller sites first — perhaps 25–50 MW of renewable energy generators and storage — can provide confidence to grid operators.
“Once that site is optimized with AI, then it can expand out to multiple sites and the AI can synchronize at the site level,” he said. “The technology can manage and control nanogrids, microgrids, and the broader utility grid, but complexity grows exponentially as you scale up.”
Espoo, Finland – Nokia today announced it has launched the industry’s first cloud-native, mission-critical industrial edge solution to allow enterprises to accelerate their *operational technology (OT) digitalization initiatives and advance their journey to Industry 4.0. The new Nokia MX Industrial Edge is a scalable application and compute solution designed to meet the mission-critical digital transformation needs of asset-intensive industries such as manufacturing, energy, and transportation. It uniquely combines compute, storage, wired/wireless networking, one-click industrial applications and automated management onto a unified, on-premise OT digital transformation platform.
Industry 4.0 requires widespread digitalization and connectivity of equipment, machines and other assets in industrial environments. Due to the volume and velocity of data generated, and the need for real-time automation, increasingly data needs to be processed at the edge – close to where it is generated. By 2025, Gartner predicts that 75% of industrial data will be processed at the edge.
By adopting the Nokia MX Industrial Edge, enterprises will benefit from an on-premise cloud architecture that unifies edge requirements in an easy-to-use, deploy everywhere, as-a-service package. It removes the complexity, knowledge, and economic hurdles typically associated with deployment, integration and life cycle management of high-performance compute applications and mission-critical networking.
The platform’s extreme scalability enables multi-facility enterprises, such as logistics companies, to deploy the same technology in all their locations, whether large or small, making the benefits of ‘develop-once, deploy-everywhere’ a reality. The Nokia MX Industrial Edge is powered by the Nokia Digital Automation Cloud (Nokia DAC), providing the enterprise a single pane of glass user experience to manage everything from applications to private wireless networking.
Stephan Litjens, Head of Enterprise Solutions at Nokia, said: "Industry 4.0 is transforming asset-intensive industries by integrating and digitalizing all processes and systems across the industrial value chain. This will result in an explosion of data – and taking the right actions based on that data in near real-time will be critical to the success of digital transformation initiatives. Ensuring performance, along with aspects like keeping data local and secure while being resilient against internet connectivity failures, is not possible with a centralized cloud, making the on-prem edge the architecture of choice for this new breed of OT applications. The Nokia MX Industrial Edge is built from the ground up to deliver the guaranteed performance, security and reliability that OT digitalization use cases require."
Caroline Chappell, Research Director, Analysys Mason, said: “Enterprises are increasingly focusing their digital transformation efforts on the application of software and cloud capabilities to operational technologies (OT) to reap the benefits of agility and cost-efficiency in asset-intensive industrial environments. Enterprises need on-premise edge clouds, like the Nokia MX Industrial Edge, to provide secure, resilient, and high-performance execution environments for mission-critical OT applications. Enterprises will be looking for an edge cloud solution partner that can tap into a broad ecosystem of cloud stack and industrial application vendors, understand their stringent operational needs, data sovereignty requirements, and which can bring a deep knowledge of the network to an increasingly complex, connected industrial landscape."
The Nokia edge solution is also available for enterprises in combination with Nokia Digital Automation Cloud applications, such as High Accuracy Indoor Positioning, Plug and Play private wireless, or with Nokia's alternative private wireless solution, Nokia Modular Private Wireless (MPW). Flexible consumption-based pricing models provide 'pay as you grow' subscription flexibility while minimizing upfront investments.
Nokia MX Industrial Edge comes in a variety of configurations to support small, medium, and large-scale industrial deployments. Based on the Nokia AirFrame Open Edge server, leveraging Intel’s latest innovations and CPUs for high-capacity processing, the Nokia edge solution is designed for compute-intensive tasks and advanced AI/machine learning (ML) workloads, with optional graphics processing unit (GPU) support. In addition, high-performance network interface cards (NICs) and packet processing systems scale to support very large 5G standalone (SA) private wireless traffic flows. It offers extreme resilience and reliability through an end-to-end high availability (HA) architecture, supports geographical redundancy (GR) for business continuity, and assures performance via integrated orchestration features for service performance management.
Caroline Chan, vice president, Network Platforms Group and GM, Network Business Incubation Division, Intel, said: “Nokia and Intel have a long-standing partnership to provide innovative solutions from the core to the edge of the intelligent network, which Nokia is expanding to accelerate Industry 4.0 adoption. The combination of Intel’s innovations and CPUs alongside Nokia’s MX Industrial Edge Platform and 5G technology will offer enterprise customers the ability to connect, deploy and manage their environments. Across different verticals, enterprises will benefit from the scalable performance and high-speed, low-latency reliable communications.”
Enterprise customers can accelerate their OT digitalization initiatives with the one-click deployment of industrial applications on the MX Industrial Edge. The catalog contains Nokia applications for a variety of common digitalization use cases and is complemented by applications from independent software vendors (ISV) which undergo an onboarding process testing reliability, performance, and security. In addition, Nokia's ecosystem-neutral approach also enables industrial customers to take advantage of edge cloud compatible applications present in hyperscale clouds, as well as industrial partner clouds.
Finally, the MX Industrial Edge also reduces southbound IIoT system integration complexity: its Industrial connectors translate industrial data protocols, and the Nokia Integrated Operations Center provides a single pane of glass view across all systems and helps create industrial automation workflows.
Litjens added: “All industrial and enterprise campuses, such as factories, logistics hubs, ports, etc. are multi-solution and multi-partner environments. By adopting an ecosystem-neutral approach and integration plug-ins, our customers get unparalleled flexibility and benefit from the widest array of applications and use cases to adopt innovations to advance their digital transformation.”
*Operational Technology (OT) provides the technology to monitor and control physical processes, systems, devices, and infrastructure in a production environment, i.e. it provides the means to 'control the physical with the digital'.
Barely a day goes by without a news headline heralding developments in artificial intelligence. Tantalising as each story may sound, the scientific community is likely to be more drawn to the backstory. The countless hours of hard work behind the scenes and, behind that, the strong belief – backed by political will and investment – that AI can transform science and society.
In its proposed regulation, announced on 21 April, the Commission spells out that ‘trustworthy AI’ means safeguarding freedoms and safety while encouraging innovation, investment and commercial uptake. Ethical concerns are also raised about the way AI is developed, how data is obtained and ‘trained’ during machine-learning processes to avoid bias, and how the information is used.
As the Commission’s proposal is debated in the EU’s law-making chambers, questions about AI compliance and strategies for enforcing it are also being explored. Issues about its use in law enforcement (e.g. facial recognition systems), credit scoring and insurance risk, as well as its potential abuses (e.g. deep fakes, scams, subverting justice and democracy) are also high on the agenda.
The EU naturally wants to capitalise on the benefits while ensuring that suitable checks and balances (standards and agreements) are in place to guide developments. Many of these challenges and opportunities are presented in a 2020 White Paper, ‘Artificial intelligence – A European approach to excellence and trust’.
World-class AI research (and funding)
Europe is betting big on artificial intelligence. There are good grounds for such optimism. The global market for AI, which includes software, hardware and services, is forecast to grow by 16.4% to $327.5 billion in 2021 and push through the $500 billion mark by 2024 thanks to a five-year compound annual growth rate (CAGR) of 17.5%, according to IDC data.
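The arithmetic behind IDC's projection checks out: compounding the 2021 figure at the stated five-year CAGR does clear the $500 billion mark by 2024. A quick worked check:

```python
def project(value, annual_rate, years):
    """Compound a starting value forward at a constant annual growth rate."""
    return value * (1 + annual_rate) ** years

# IDC figures quoted above: $327.5B in 2021 compounding at a 17.5% CAGR.
v_2024 = project(327.5, 0.175, 3)  # ~531.3, comfortably past the $500B mark
```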
Reaching this potential is going to take a lot of strategic planning and hard work. According to the Commission, Europe needs to increase and better coordinate public and private investment to “reap the full benefits of AI” and strengthen its position in this key enabling technology. That is why digital technology and AI feature prominently in EU research programmes and initiatives. These include Horizon Europe, the main R&I funding programme supporting technological and societal aspects of AI development and deployment, and European Research Council grants to stimulate AI-focused research centres and leadership across the EU and beyond.
Other AI initiatives include European Innovation Council funding to help promising innovators and SMEs turn research into breakthrough innovations, and European Partnerships bringing private and public R&I partners together to tackle pressing societal challenges. For example, the AI, Data and Robotics Partnership is looking for cross-fertilisation between partners from the digital and space sectors/industries, thus driving development and uptake of new technologies.
Public-private partnerships (PPP) are another avenue to advance AI in and with Europe. One AI-PPP is being set up to boost “value-driven trustworthy AI, data and robotics based on European fundamental rights, principles and values”. It brings together a range of initiatives (EurAI, CLAIRE, ELLIS, BDVA and euRobotics) covering different aspects of big data, intelligent systems, machine learning, etc.
More details about EU projects, results and publications, including a handy CORDIS Results Pack on how AI is “turbocharging European industry”, can be found on the Commission’s dedicated AI research webpage.
Better understanding of ‘trust’
“With all of the attention on machine learning, many are seeking a better understanding of this hot topic and the benefits that it could provide to their organisations,” notes SAS, a software and analytics business, in a helpful primer on ethics and AI. The same is true of the international research community, which is heavily invested both in the science behind AI and in what it can do for individual fields.
Artificial intelligence and related fields are driving innovation in countless areas, from language processing and security applications to image analysis, medicine, self-driving vehicles, personalised marketing, e-commerce, and much more.
According to a 2020 report on AI research and innovation, ‘Europe paving its own way’, the EU ranks among global leaders in AI science and it has actively supported ethical and human-centric progress, but its innovation performance in the field needs a boost.
Effort in the coming years is thus focused on developing and deploying AI solutions with positive impacts on society and the economy, while prioritising public and private investment including better access to and use of scientific data.
Focus is also needed on extending trustworthy AI and “ethics by design” in Horizon Europe R&I projects. The impact of such determination could “bring about significant improvements to society”, the Commission notes, delivering high-impact innovations in healthcare, education, transport, industry, climate action, and many other sectors.
On the flipside, as AI becomes more pervasive, it will bring about considerable socio-economic changes which need discussing, according to the Commission. This is why it has launched a consultation (Europe’s Digital Decade) to explore these implications as it forges new laws governing AI developments: “The EU must act as one, based on European values, to promote the development and deployment of AI.”
To help translate the EU’s digital ambitions into concrete goals with built-in monitoring and reporting milestones to reach by 2030, the Commission came up with a so-called Digital Compass revolving around four “cardinal points” outlined briefly here:
Digital skills (at least 80% of adults and a much higher proportion of women should have digital skills to reach a target of 20 million ICT specialists employed in the EU)
Well-functioning, secure and sustainable digital infrastructures (all EU households should have gigabit connectivity and all populated areas should be covered by 5G)
Digital transformation of businesses (75% of companies should use cloud computing services, big data and AI)
Digitalisation of public services (all key public services should be available online including secured access to e-medical records and eID solutions)
The pandemic has shown the importance of digital technologies and skills, and highlighted where improvements are still needed. In a prepared statement about Europe’s post-Covid digital ambitions, Commission President Ursula von der Leyen said, “We must now make this Europe’s Digital Decade so that all citizens and businesses can access the very best the digital world can offer,” and concluded that the new Digital Compass “gives us a clear view of how to get there.”
The EU-funded EU-Japan.AI project will develop a platform-based approach to connect relevant stakeholders from the EU and Japan and support knowledge exchange on innovative AI applications for manufacturing.