RT/ Spider-inspired, shape-changing robot now even smaller

November 16th 2023

Robotics biweekly vol.85, 2nd November — 16th November


TL;DR

  • Shape-changing robots just got a lot smaller. In a new study, engineers at the University of Colorado Boulder debuted mCLARI, a 2-centimeter-long modular robot that can passively change its shape to squeeze through narrow gaps in multiple directions. It weighs less than a gram but can support over three times its body weight as an additional payload.
  • The paper, published in the journal Opto-Electronic Advances, introduces a new strategy in the construction of 3D force sensors through the combination of knots and optical fibers. By analyzing the structure mechanics of the knot, researchers have successfully improved the sensing performance for pressure, friction and slips. This approach simplifies the system complexity and overcomes computational challenges associated with signal decoupling.
  • Inspired by humans’ ability to handle unfamiliar objects, a group from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) designed Feature Fields for Robotic Manipulation (F3RM), a system that blends 2D images with foundation model features into 3D scenes to help robots identify and grasp nearby items. F3RM can interpret open-ended language prompts from humans, making the method helpful in real-world environments that contain thousands of objects, like warehouses and households.
  • Researchers have made significant strides in biomaterial technology and rehabilitation medicine. They’ve developed a novel approach to healing muscle injury by employing ‘injectable tissue prosthesis’ in the form of conductive hydrogels and combining it with a robot-assisted rehabilitation system.
  • Scientists have developed a souped-up telepresence robot that responds automatically and in real-time to a remote user’s movements and gestures made in virtual reality.
  • A computer scientist has developed an AI-ready architecture that is twice as powerful as comparable in-memory computing approaches. The researcher applies a new computational paradigm using special circuits known as ferroelectric field effect transistors (FeFETs). Within a few years, this could prove useful for generative AI, deep learning algorithms and robotic applications.
  • Smart, stretchable and highly sensitive, a new soft sensor opens the door to a wide range of applications in robotics and prosthetics. When applied to the surface of a prosthetic arm or a robotic limb, the sensor skin provides touch sensitivity and dexterity, enabling tasks that can be difficult for machines such as picking up a piece of soft fruit. The sensor is also soft to the touch, like human skin, which helps make human interactions safer and more lifelike.
  • A new study provides evidence that pigeons tackle some problems just as artificial intelligence would — allowing them to solve difficult tasks that would vex humans.
  • Voice speed and interaction style may determine whether a user sees a digital assistant like Alexa or Siri as a helpful partner or something to control, according to researchers. The findings reveal insights into the parasocial, or one-sided, relationships that people can form with digital assistants, according to the researchers.
  • AI and algorithms can and are being used to radicalize, polarize, and spread racism and political instability, says an academic. An expert argues that AI and algorithms are not just tools deployed by national security agencies to prevent malicious activity online, but can be contributors to polarization, radicalism and political violence — posing a threat to national security.

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista

Latest News & Research

mCLARI: a shape-morphing insect-scale robot capable of omnidirectional terrain-adaptive locomotion in laterally confined spaces

by Heiko Kabutz et al in arXiv

This shape-changing robot just got a lot smaller. In a new study, engineers at the University of Colorado Boulder debuted mCLARI, a 2-centimeter-long modular robot that can passively change its shape to squeeze through narrow gaps in multiple directions. It weighs less than a gram but can support over three times its body weight as an additional payload.

 

The research was led by Kaushik Jayaram, assistant professor in the Paul M. Rady Department of Mechanical Engineering, and doctoral student Heiko Kabutz. Their work recently won the Best Paper Award on Safety, Security, and Rescue Robotics at the 2023 International Conference on Intelligent Robots and Systems in Detroit.

The paper improves on the team’s previous miniature shape-morphing robot, called CLARI, by making it smaller and faster. The robot can maneuver expertly in cluttered environments by switching from running forward to moving side-to-side, not by turning but by changing its shape, giving it the potential to aid first responders after major disasters. The research brings Jayaram’s group one step closer to realizing insect-scale robots that can move seamlessly in natural terrains similar to their animal counterparts — largely by combining a soft robot’s adaptability with a rigid robot’s agility.

Their latest version is scaled down 60% in length and 38% in mass, while maintaining 80% of the actuation power relative to its predecessor. The robot is also more than three times as fast as its predecessor, reaching running speeds of 60 millimeters per second, or three of its body lengths per second. Additionally, similar to CLARI, the robot is capable of locomoting in a variety of shapes and running at multiple frequencies using multiple gaits.
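
For readers who want to check the relative-speed figure, here is a minimal arithmetic sketch using only the numbers quoted above (a roughly 2-centimeter body running at 60 millimeters per second); the variable names are purely illustrative.

```python
# Sanity check of mCLARI's relative speed, using the figures quoted in the article:
# a ~2 cm (20 mm) body running at 60 mm/s covers about 3 body lengths per second.
body_length_mm = 20.0   # approximate body length of the 2-centimeter robot
speed_mm_s = 60.0       # reported top running speed

relative_speed = speed_mm_s / body_length_mm
print(f"{relative_speed:.1f} body lengths per second")  # -> 3.0
```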

This latest breakthrough in miniaturization is enabled by the origami-based design and laminate fabrication technique Jayaram and his colleagues previously used to make a robot called HAMR-Jr. Using this novel approach, Jayaram and Kabutz are able to scale down (or up) their design without sacrificing mechanical dexterity, bringing such robots closer in size to real-world application needs. That includes inspecting and maintaining valuable assets like jet engines.

Kabutz, the lead author of the new study, has surgeon-like hands that allow him to build and fold the tiny leg modules of the robot. Kabutz grew up fascinated by robots and competed in robotic competitions in high school.

“Initially, I was interested in building bigger robots,” said Kabutz, “but when I came to Jayaram’s lab, he really got me interested in building bioinspired robots at the insect scale.”

“Since these robots can deform, you can still have slightly larger sizes,” Jayaram said. “If you have slightly more size, you can carry more weight. You can have more sensors. You’ll have a longer lifetime and be more stable. But when you need to, you can squish through and go into those specific gaps, such as inspection access ports in a jet engine.”

Jayaram’s research team studies concepts from biology and applies them to the design of real-world engineered systems. In his lab, you can find robots modeled after the body morphologies of various arthropods, including cockroaches and spiders.

“We are fundamentally interested in understanding why animals are the way they are and move the way they do,” said Jayaram, “and how we can build bioinspired robots that can address social needs, like search and rescue, environmental monitoring or even use them during surgery.”

 

Knot-inspired optical sensors for slip detection and friction measurement in dexterous robotic manipulation

by Jing Pan et al in Opto-Electronic Advances

Hands possess an awe-inspiring ability to perceive friction forces with remarkable accuracy, all thanks to the mechanical receptors nestled within the skin. This natural gift allows objects to be handled deftly and tools to be wielded effortlessly, infusing daily life with a delightful flexibility. But what if this tactile prowess could be unlocked in robots?

Imagine a world where robots possess the uncanny ability to detect and understand friction and slip, just like we do. This tantalizing concept fuels the pursuit of flexible tactile detection, a cutting-edge field that aims to equip robots with a sense of touch akin to our own. By delicately navigating the intricate landscape of friction, robots could conquer diverse operations with precision and grace.

While strides have been made, the development of flexible 3D force sensors is still wobbling through its research stage. Unlike their rigid beam-based counterparts, these sensors hold incredible promise, capable of conforming to various surfaces and executing tasks requiring finesse. However, researchers are still grappling with the challenges of complex structures, intricate decoupling methods, and inconsistent performance.

Photograph of two robotic tactile fingers and the exploded diagram of the finger. Credit: Opto-Electronic Advances (2023). DOI: 10.29026/oea.2023.230076

Drawing inspiration from knots, an interesting idea presents itself — polymer optical fiber knots. The knot’s three-dimensional structure breaks the fiber’s original circular symmetry, redistributing the load on the fiber surface. In turn, this empowers individual fiber units to respond uniquely to force stimuli from different directions, promising the creation of intricate 3D force sensing devices.

Polymer optical fibers boast a host of advantages: compactness, robustness, simplified fabrication processes, and cost-effectiveness. This study unravels their responses to directional forces and demonstrates their potential for 3D force measurement.

Building upon this knowledge, an array of fiber knots is meticulously designed and brought to life, enabling successful 3D force detection. By integrating these sensors at a robot’s fingertips, tactile information including pressure, friction and slips becomes accessible. The robotic hand can leverage this intelligence to flawlessly execute advanced tasks such as adaptive grasping and tool manipulation.
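
To make the idea of 3D force decoupling from a knot array more concrete, here is a hedged sketch: several fiber-knot units respond differently to forces from different directions, and a calibration matrix fitted offline maps their intensity readings back to a force vector. The four-knot layout, the linear model and all numbers are illustrative assumptions, not the calibration procedure used in the paper.

```python
import numpy as np

# Illustrative sketch: map light-intensity changes from several fiber-knot units
# to a 3D force vector (Fx, Fy, Fz) with a linear calibration fitted offline.
rng = np.random.default_rng(0)

# Offline calibration: apply known forces, record knot responses.
known_forces = rng.uniform(-1.0, 1.0, size=(200, 3))          # N, (Fx, Fy, Fz)
true_sensitivity = rng.normal(size=(3, 4))                     # each knot responds differently per axis
knot_readings = known_forces @ true_sensitivity                # simulated intensity changes
knot_readings += 0.01 * rng.normal(size=knot_readings.shape)   # measurement noise

# Least-squares fit of a 4-reading -> 3-force decoupling matrix.
decoupling, *_ = np.linalg.lstsq(knot_readings, known_forces, rcond=None)

# Online use: a new set of knot readings is mapped to a force estimate.
new_reading = np.array([0.12, -0.05, 0.30, 0.08])
force_estimate = new_reading @ decoupling
print("Estimated (Fx, Fy, Fz):", np.round(force_estimate, 3))
```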

This work, published in the journal Opto-Electronic Advances, introduces a new strategy in the construction of 3D force sensors through the combination of knots and optical fibers. By analyzing the structure mechanics of the knot, researchers have successfully improved the sensing performance for pressure, friction and slips. This approach simplifies the system complexity and overcomes computational challenges associated with signal decoupling.

Snapshots during the robot manipulating a knife and a key. Credit: Opto-Electronic Advances (2023). DOI: 10.29026/oea.2023.230076

The integration of a change point detection algorithm enables real-time data processing and selection, resulting in a tactile feedback system for robotic manipulation. This system measures the normal and shear forces, and detects slips, allowing robots to adaptively grasp objects and manipulate tools. Demonstrations presented in this article include the adaptive grasping of moving objects and the dexterous manipulation of a knife and a key with a two-fingered robotic hand.
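
The article does not specify which change point detector the authors use, so the following is a minimal sketch of the general idea, with a simple CUSUM test flagging the onset of a sudden shift in a shear-force signal. The thresholds, drift value and synthetic signal are illustrative assumptions, not the algorithm from the paper.

```python
import numpy as np

def cusum_slip_detector(signal, drift=0.02, threshold=0.5):
    """Return the first index where the cumulative upward shift exceeds the threshold."""
    baseline = signal[0]
    cum_pos = 0.0
    for i, x in enumerate(signal):
        cum_pos = max(0.0, cum_pos + (x - baseline) - drift)
        if cum_pos > threshold:
            return i          # slip onset flagged here
    return None               # no change point detected

# Synthetic shear signal: steady grasp, then a gradual rise once the object starts to slip.
t = np.arange(300)
shear = 0.1 + 0.01 * np.random.default_rng(1).normal(size=t.size)
shear[200:] += 0.004 * (t[200:] - 200)   # ramp after slip starts at sample 200

onset = cusum_slip_detector(shear)
print("Slip flagged at sample:", onset)
```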

The use of polymer optical fiber makes the knot sensor lightweight, flexible, and inexpensive. The non-metallic nature of the sensor makes it immune to electromagnetic interference. The fabrication process is straightforward and does not require sophisticated micro/nano-processing techniques, significantly lowering the barrier for sensor production.

Furthermore, the polymer optical fibers not only serve as sensing elements but also act as signal transmission channels. This feature simplifies the signal acquisition process of the sensor array, which is beneficial to system integration. Additionally, a single polymer optical fiber can be used to create multiple fiber knots, allowing for additional functions through the incorporation of functional materials or strategies such as multi-wavelength signal multiplexing.

 

Distilled Feature Fields Enable Few-Shot Language-Guided Manipulation

by William Shen et al in arXiv

Imagine you’re visiting a friend abroad, and you look inside their fridge to see what would make for a great breakfast. Many of the items initially appear foreign to you, with each one encased in unfamiliar packaging and containers. Despite these visual distinctions, you begin to understand what each one is used for and pick them up as needed. Inspired by humans’ ability to handle unfamiliar objects, a group from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) designed Feature Fields for Robotic Manipulation (F3RM), a system that blends 2D images with foundation model features into 3D scenes to help robots identify and grasp nearby items. F3RM can interpret open-ended language prompts from humans, making the method helpful in real-world environments that contain thousands of objects, like warehouses and households.

F3RM offers robots the ability to interpret open-ended text prompts using natural language, helping the machines manipulate objects. As a result, the machines can understand less-specific requests from humans and still complete the desired task. For example, if a user asks the robot to “pick up a tall mug,” the robot can locate and grab the item that best fits that description.

Feature Fields for Robotic Manipulation (F3RM) enables robots to interpret open-ended text prompts using natural language, helping the machines manipulate unfamiliar objects. The system’s 3D feature fields could be helpful in environments that contain thousands of objects, such as warehouses. Credit: William Shen et al

“Making robots that can actually generalize in the real world is incredibly hard,” says Ge Yang, postdoc at the National Science Foundation AI Institute for Artificial Intelligence and Fundamental Interactions and MIT CSAIL. “We really want to figure out how to do that, so with this project, we try to push for an aggressive level of generalization, from just three or four objects to anything we find in MIT’s Stata Center. We wanted to learn how to make robots as flexible as ourselves, since we can grasp and place objects even though we’ve never seen them before.”

The method could assist robots with picking items in large fulfillment centers with inevitable clutter and unpredictability. In these warehouses, robots are often given a description of the inventory that they’re required to identify. The robots must match the text provided to an object, regardless of variations in packaging, so that customers’ orders are shipped correctly.

For example, the fulfillment centers of major online retailers can contain millions of items, many of which a robot will have never encountered before. To operate at such a scale, robots need to understand the geometry and semantics of different items, with some being in tight spaces. With F3RM’s advanced spatial and semantic perception abilities, a robot could become more effective at locating an object, placing it in a bin, and then sending it along for packaging. Ultimately, this would help factory workers ship customers’ orders more efficiently.

“One thing that often surprises people with F3RM is that the same system also works on a room and building scale, and can be used to build simulation environments for robot learning and large maps,” says Yang. “But before we scale up this work further, we want to first make this system work really fast. This way, we can use this type of representation for more dynamic robotic control tasks, hopefully in real-time, so that robots that handle more dynamic tasks can use it for perception.”

The MIT team notes that F3RM’s ability to understand different scenes could make it useful in urban and household environments. For example, the approach could help personalized robots identify and pick up specific items. The system aids robots in grasping their surroundings — both physically and perceptively.

“Visual perception was defined by David Marr as the problem of knowing ‘what is where by looking,’” says senior author Phillip Isola, MIT associate professor of electrical engineering and computer science and CSAIL principal investigator.

“Recent foundation models have gotten really good at knowing what they are looking at; they can recognize thousands of object categories and provide detailed text descriptions of images. At the same time, radiance fields have gotten really good at representing where stuff is in a scene. The combination of these two approaches can create a representation of what is where in 3D, and what our work shows is that this combination is especially useful for robotic tasks, which require manipulating objects in 3D.”

F3RM begins to understand its surroundings by taking pictures on a selfie stick. The mounted camera snaps 50 images at different poses, enabling it to build a neural radiance field (NeRF), a deep learning method that takes 2D images to construct a 3D scene. This collage of RGB photos creates a “digital twin” of its surroundings in the form of a 360-degree representation of what’s nearby.

In addition to a highly detailed neural radiance field, F3RM also builds a feature field to augment geometry with semantic information. The system uses CLIP, a vision foundation model trained on hundreds of millions of images to efficiently learn visual concepts. By reconstructing the 2D CLIP features for the images taken by the selfie stick, F3RM effectively lifts the 2D features into a 3D representation.
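
The core of this "lifting" step is that the same volume-rendering weights used for color can also composite features along a camera ray, and the rendered feature is regressed onto the 2D CLIP feature of the corresponding pixel. The sketch below shows that idea only; the shapes, the plain-NumPy arithmetic and the single-ray loss are illustrative assumptions, not the F3RM implementation.

```python
import numpy as np

num_samples, feat_dim = 64, 512            # samples per ray, CLIP feature size

def render_feature(weights, point_features):
    """Alpha-composite per-point features into one feature per ray."""
    return weights @ point_features        # (num_samples,) @ (num_samples, feat_dim)

rng = np.random.default_rng(0)
weights = rng.dirichlet(np.ones(num_samples))          # NeRF-style ray weights, sum to ~1
point_features = rng.normal(size=(num_samples, feat_dim))   # feature head evaluated at 3D samples
target_clip_feature = rng.normal(size=feat_dim)        # 2D CLIP feature for this pixel

rendered = render_feature(weights, point_features)
loss = np.mean((rendered - target_clip_feature) ** 2)  # distillation loss for this ray
print("per-ray distillation loss:", round(float(loss), 4))
```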

After receiving a few demonstrations, the robot applies what it knows about geometry and semantics to grasp objects it has never encountered before. Once a user submits a text query, the robot searches through the space of possible grasps to identify those most likely to succeed in picking up the object requested by the user. Each potential option is scored based on its relevance to the prompt, similarity to the demonstrations the robot has been trained on, and if it causes any collisions. The highest-scored grasp is then chosen and executed.
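
A hedged sketch of that selection step: each candidate grasp is scored by how well the feature at its location matches the text query, how similar its local features are to the few demonstrations, and whether it collides with the scene, then the top-scoring grasp is executed. The cosine-similarity scoring and the weights below are illustrative assumptions, not the exact objective used by F3RM.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def score_grasp(grasp_feat, text_feat, demo_feats, in_collision,
                w_text=1.0, w_demo=1.0, w_coll=10.0):
    text_score = cosine(grasp_feat, text_feat)                  # relevance to the prompt
    demo_score = max(cosine(grasp_feat, d) for d in demo_feats) # similarity to demonstrations
    return w_text * text_score + w_demo * demo_score - w_coll * float(in_collision)

rng = np.random.default_rng(0)
text_feat = rng.normal(size=512)                        # embedded query, e.g. "pick up a tall mug"
demo_feats = [rng.normal(size=512) for _ in range(3)]   # features from a few demonstrations
candidates = [(rng.normal(size=512), rng.random() < 0.2) for _ in range(100)]  # (feature, collides?)

best = max(candidates, key=lambda c: score_grasp(c[0], text_feat, demo_feats, c[1]))
print("best grasp collides:", best[1])
```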

To demonstrate the system’s ability to interpret open-ended requests from humans, the researchers prompted the robot to pick up Baymax, a character from Disney’s “Big Hero 6.” While F3RM had never been directly trained to pick up a toy of the cartoon superhero, the robot used its spatial awareness and vision-language features from the foundation models to decide which object to grasp and how to pick it up.

F3RM also enables users to specify which object they want the robot to handle at different levels of linguistic detail. For example, if there is a metal mug and a glass mug, the user can ask the robot for the “glass mug.” If the bot sees two glass mugs and one of them is filled with coffee and the other with juice, the user can ask for the “glass mug with coffee.” The foundation model features embedded within the feature field enable this level of open-ended understanding.

“If I showed a person how to pick up a mug by the lip, they could easily transfer that knowledge to pick up objects with similar geometries such as bowls, measuring beakers, or even rolls of tape. For robots, achieving this level of adaptability has been quite challenging,” says MIT Ph.D. student, CSAIL affiliate, and co-lead author William Shen.

“F3RM combines geometric understanding with semantics from foundation models trained on internet-scale data to enable this level of aggressive generalization from just a small number of demonstrations.”

 

Injectable tissue prosthesis for instantaneous closed-loop rehabilitation

by Subin Jin, Heewon Choi, Duhwan Seong, Chang-Lim You, Jong-Sun Kang, Seunghyok Rho, Won Bo Lee, Donghee Son, Mikyung Shin in Nature

In a recent publication in the journal Nature, researchers from the Institute of Basic Science (IBS) in South Korea have made significant strides in biomaterial technology and rehabilitation medicine. They’ve developed a novel approach to healing muscle injury by employing “injectable tissue prosthesis” in the form of conductive hydrogels and combining it with a robot-assisted rehabilitation system.

Let’s imagine you are swimming in the ocean. A giant shark approaches and bites a huge chunk of meat out of your thigh, resulting in a complete loss of motor/sensor function in your leg. If left untreated, such severe muscle damage would result in permanent loss of function and disability. How on Earth will you be able to recover from this kind of injury?

Rehabilitation of these kinds of muscle injuries has long called for an efficient closed-loop gait rehabilitation system that merges lightweight exoskeletons and wearable/implantable devices. Such an assistive prosthetic system is required to aid patients through the process of recovering the sensory and motor functions linked to nerve and muscle damage.

Unfortunately, the mechanical properties and rigid nature of existing electronic materials render them incompatible with soft tissues. This leads to friction and potential inflammation, stalling patient rehabilitation.

To overcome these limitations, the IBS researchers turned to a material commonly used as a wrinkle-smoothing filler, called hyaluronic acid. Using this substance, an injectable hydrogel was developed for “tissue prostheses,” which can temporarily fill the gap of the missing muscle/nerve tissues while it regenerates. The injectable nature of this material gives it a significant advantage over traditional bioelectronic devices, which are unsuitable for narrow, deep, or small areas, and necessitate invasive surgeries.

Thanks to its highly “tissue-like” properties, this hydrogel seamlessly interfaces with biological tissues and can be easily administered to hard-to-reach body areas without surgery. The reversible and irreversible crosslinks within the hydrogel adapt to high shear stress during injection, ensuring excellent mechanical stability. This hydrogel also incorporates gold nanoparticles, which gives it decent electrical properties. Its conductive nature allows for the effective transmission of electrophysiological signals between the two ends of injured tissues. In addition, the hydrogel is biodegradable, meaning that patients do not need a second surgery to remove it.

With mechanical properties akin to natural tissues, exceptional tissue adhesion, and injectable characteristics, researchers believe this material offers a novel approach to rehabilitation.

Next, the researchers put this novel idea to the test in rodent models. To simulate volumetric muscle loss injury, a large chunk of muscle was removed from the hind legs of the animals. By injecting the hydrogel and implanting two kinds of stretchable tissue-interfacing devices for electrical sensing and stimulation, the researchers were able to improve the gait of the “injured” rodents. The hydrogel prosthetics were combined with robot assistance, guided by muscle electromyography signals. Together, the two helped enhance the animals’ gait without nerve stimulation. Furthermore, muscle tissue regeneration was effectively improved over the long term after the conductive hydrogel was used to fill the muscle damage.
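
The article only describes the closed loop at a high level, so here is a minimal sketch of one plausible step: the electromyography signal recorded through the conductive hydrogel interface is rectified and smoothed, and robotic assistance is applied while the muscle-activation envelope stays above a threshold. The thresholds, gains and synthetic signal are illustrative assumptions, not the controller used in the study.

```python
import numpy as np

def emg_envelope(emg, window=50):
    """Rectify and smooth a raw EMG trace with a moving average."""
    rectified = np.abs(emg)
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")

def assist_command(envelope, threshold=0.2, gain=1.5):
    """Return an assist level proportional to supra-threshold muscle activity."""
    return gain * np.clip(envelope - threshold, 0.0, None)

rng = np.random.default_rng(0)
emg = 0.05 * rng.normal(size=1000)                               # baseline noise
emg[400:600] += 0.4 * np.sin(np.linspace(0, 20 * np.pi, 200))    # a burst of voluntary activity

assist = assist_command(emg_envelope(emg))
print("assist active during", int((assist > 0).sum()), "of 1000 samples")
```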

The injectable conductive hydrogel developed in this study excels in electrophysiological signal recording and stimulation performance, offering the potential to expand its applications. It presents a fresh approach to the field of bioelectronic devices and holds promise as a soft tissue prosthesis for rehabilitation support.

Emphasizing the significance of the research, Professor SHIN Mikyung notes, “We’ve created an injectable, mechanically tough, and electrically conductive soft tissue prosthesis ideal for addressing severe muscle damage requiring neuromusculoskeletal rehabilitation. The development of this injectable hydrogel, utilizing a novel cross-linking method, is a notable achievement. We believe it will be applicable not only in muscles and peripheral nerves but also in various organs like the brain and heart.”

Professor SON Donghee added, “In this study, the closed-loop gait rehabilitation system entailing tough injectable hydrogel and stretchable and self-healing sensors could significantly enhance the rehabilitation prospects for patients with neurological and musculoskeletal challenges. It could also play a vital role in precise diagnosis and treatment across various organs in the human body.”

 

Robot stand-in mimics movements in VR

by researchers from Cornell and Brown University

Researchers have developed a souped-up telepresence robot that responds automatically and in real-time to a remote user’s movements and gestures made in virtual reality.

The robotic system, called VRoxy, allows a remote user in a small space, like an office, to collaborate via VR with teammates in a much larger space. VRoxy represents the latest in remote, robotic embodiment.

Donning a VR headset, a user has access to two view modes: Live mode shows an immersive image of the collaborative space in real time for interactions with local collaborators, while navigational mode displays rendered pathways of the room, allowing remote users to “teleport” to where they’d like to go. This navigation mode allows for quicker, smoother mobility for the remote user and limits motion sickness.

The system’s automatic nature lets remote teammates focus solely on collaboration rather than on manually steering the robot, researchers said.

“The great benefit of virtual reality is we can leverage all kinds of locomotion techniques that people use in virtual reality games, like instantly moving from one position to another,” said Mose Sakashita, a doctoral student in the field of information science at Cornell. “This functionality enables remote users to physically occupy a very limited amount of space but collaborate with teammates in a much larger remote environment.”

VRoxy’s automatic, real-time responsiveness is key for both remote and local teammates, researchers said. With a robot proxy like VRoxy, a remote teammate confined to a small office can interact in a group activity held in a much larger space, like in a design collaboration scenario.

For teammates, the VRoxy robot automatically mimics the user’s body position and other vital nonverbal cues that are otherwise lost with telepresence robots and on Zoom. For instance, VRoxy’s monitor — which displays a rendering of the user’s face — will tilt accordingly depending on the user’s focus.

It is equipped with a 360-degree camera, a monitor that displays facial expressions captured by the user’s VR headset, a robotic pointer finger and omnidirectional wheels.
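
To illustrate the kind of mapping VRoxy automates, the sketch below mirrors a VR headset pose onto the proxy robot's monitor tilt and turns a "teleport" selection in navigation mode into a drive goal for the omnidirectional base. The function names, dataclass and clamping limits are hypothetical, not the actual VRoxy software.

```python
from dataclasses import dataclass

@dataclass
class HeadsetPose:
    yaw_deg: float      # where the remote user is looking, left/right
    pitch_deg: float    # up/down

def monitor_tilt_from_pose(pose: HeadsetPose, max_tilt=30.0):
    """Mirror the user's head orientation on the robot's face display, within mechanical limits."""
    clamp = lambda v: max(-max_tilt, min(max_tilt, v))
    return clamp(pose.yaw_deg), clamp(pose.pitch_deg)

def waypoint_from_teleport(teleport_xy, room_scale=1.0):
    """Convert a teleport target chosen in the rendered room into a drive goal for the base."""
    x, y = teleport_xy
    return (room_scale * x, room_scale * y)

print(monitor_tilt_from_pose(HeadsetPose(yaw_deg=45.0, pitch_deg=-10.0)))  # clamped to (30.0, -10.0)
print(waypoint_from_teleport((2.5, 1.0)))
```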

 

First demonstration of in-memory computing crossbar using multi-level Cell FeFET

by Taha Soliman, Swetaki Chatterjee, Nellie Laleni, Franz Müller, Tobias Kirchner, Norbert Wehn, Thomas Kämpfe, Yogesh Singh Chauhan, Hussam Amrouch in Nature Communications

Hussam Amrouch has developed an AI-ready architecture that is twice as powerful as comparable in-memory computing approaches. As reported in the journal Nature Communications, the professor at the Technical University of Munich (TUM) applies a new computational paradigm using special circuits known as ferroelectric field effect transistors (FeFETs). Within a few years, this could prove useful for generative AI, deep learning algorithms, and robotic applications.

The basic idea is simple: unlike previous chips, where only calculations were carried out on transistors, they are now the location of data storage as well. That saves time and energy.

“As a result, the performance of the chips is also boosted,” says Hussam Amrouch, a professor of AI processor design at the Technical University of Munich (TUM).

The transistors on which he performs calculations and stores data measure just 28 nanometers, with millions of them placed on each of the new AI chips. The chips of the future will have to be faster and more efficient than earlier ones. Consequently, they cannot heat up as quickly. This is essential if they are to support such applications as real-time calculations when a drone is in flight, for example.

“Tasks like this are extremely complex and energy-hungry for a computer,” explains the professor.

These key requirements for a chip are summed up mathematically by the parameter TOPS/W: “tera-operations per second per watt.” This can be seen as the currency for the chips of the future. The question is how many trillion operations (TOP) a processor can perform per second (S) when provided with one watt (W) of power. The new AI chip, developed in a collaboration between Bosch and Fraunhofer IPMS and supported in the production process by the US company GlobalFoundries, can deliver 885 TOPS/W. This makes it twice as powerful as comparable AI chips, including an MRAM chip by Samsung. CMOS chips, which are now commonly used, operate in the range of 10–20 TOPS/W.
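
A quick worked example of what those TOPS/W figures mean in energy per operation, using only the numbers quoted above: energy per operation is simply one watt divided by the number of operations per second.

```python
# energy_per_op = 1 watt / (TOPS/W * 1e12 ops/s), converted to femtojoules.
def femtojoules_per_op(tops_per_watt):
    return 1.0 / (tops_per_watt * 1e12) * 1e15   # J -> fJ

for name, tops_w in [("FeFET chip", 885), ("typical CMOS (low end)", 10), ("typical CMOS (high end)", 20)]:
    print(f"{name}: {femtojoules_per_op(tops_w):.1f} fJ per operation")
# The 885 TOPS/W chip lands around 1.1 fJ per operation, versus roughly 50-100 fJ
# per operation for the 10-20 TOPS/W CMOS range mentioned above.
```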

The researchers borrowed the principle of modern chip architecture from humans.

“In the brain, neurons handle the processing of signals, while synapses are capable of remembering this information,” says Amrouch, describing how people are able to learn and recall complex interrelationships.

To do this, the chip uses “ferroelectric” (FeFET) transistors. These are electronic switches that incorporate special additional characteristics (reversal of poles when a voltage is applied) and can store information even when cut off from the power source. In addition, they guarantee the simultaneous storage and processing of data within the transistors.
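
The "compute where the data are stored" idea behind an in-memory crossbar can be sketched as an analog matrix-vector product: weights are held as cell conductances, input activations are applied as row voltages, and each column's summed current is a multiply-accumulate result. The sizes, the simple multi-level quantization and the ideal noise-free cells below are illustrative assumptions, not a model of the fabricated chip.

```python
import numpy as np

def quantize_to_levels(weights, levels=4):
    """Map weights to a small number of conductance levels (multi-level cell)."""
    lo, hi = weights.min(), weights.max()
    step = (hi - lo) / (levels - 1)
    return lo + step * np.round((weights - lo) / step)

rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 32))          # layer weights to be stored in the array
conductances = quantize_to_levels(weights)   # what the multi-level cells would hold
voltages = rng.normal(size=64)               # input activations applied to the rows

column_currents = voltages @ conductances    # analog MAC: I_j = sum_i V_i * G_ij
reference = voltages @ weights               # the ideal full-precision result
print("max deviation from full precision:", float(np.max(np.abs(column_currents - reference))))
```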

“Now we can build highly efficient chipsets that can be used for such applications as deep learning, generative AI or robotics, for example where data have to be processed where they are generated,” believes Amrouch.

The goal is to use the chip to run deep learning algorithms, recognize objects in space or process data from drones in flight with no time lag. However, the professor from the Munich Institute of Robotics and Machine Intelligence (MIRMI) at TUM believes that it will be a few years before this is achieved. He thinks that it will be three to five years, at the soonest, before the first in-memory chips suitable for real-world applications become available. One reason for this, among others, lies in the security requirements of industry. Before a technology of this kind can be used in the automotive industry, for example, it is not enough for it to function reliably. It also has to meet the specific criteria of the sector.

“This again highlights the importance of interdisciplinary collaboration with researchers from various disciplines such as computer science, informatics and electrical engineering,” says the hardware expert Amrouch. He sees this as a special strength of MIRMI.

 

Touch, press and stroke: a soft capacitive sensor skin

by Mirza S. Sarwar, Ryusuke Ishizaki, Kieran Morton, Claire Preston, Tan Nguyen, Xu Fan, Bertille Dupont, Leanna Hogarth, Takahide Yoshiike, Ruixin Qiu, Yiting Wu, Shahriar Mirabbasi, John D. W. Madden in Scientific Reports

Smart, stretchable and highly sensitive, a new soft sensor developed by UBC and Honda researchers opens the door to a wide range of applications in robotics and prosthetics.

When applied to the surface of a prosthetic arm or a robotic limb, the sensor skin provides touch sensitivity and dexterity, enabling tasks that can be difficult for machines such as picking up a piece of soft fruit. The sensor is also soft to the touch, like human skin, which helps make human interactions safer and more lifelike.

The sensor and working principle. (A) Top view of the electrode architecture (left) and side view (right). (B) Sensor electrode layout showing four top electrodes (blue, E1–E4) and one bottom electrode (red). Electric fields couple directly between the top and bottom electrodes (X1), while some fringing fields (X2, X3) extend above the plane of the device and can couple into a finger for proximity detection. The device is a mutual capacitive sensor: in (D), an applied pressure displaces the top electrodes (originally grey) downwards (blue) to increase coupling with the bottom electrode (red), while in (E), shear is detected by the lateral displacement and varying overlap of the top and bottom electrodes (dielectric omitted for clarity). In (F), portions of the sensor are cut away to display the structure and electrode arrangement. (G) Cross-section of the sensor showing localized buckling upon shearing with a finger.

“Our sensor can sense several types of forces, allowing a prosthetic or robotic arm to respond to tactile stimuli with dexterity and precision. For instance, the arm can hold fragile objects like an egg or a glass of water without crushing or dropping them,” said study author Dr. Mirza Saquib Sarwar, who created the sensor as part of his PhD work in electrical and computer engineering at UBC’s faculty of applied science.

The sensor is primarily composed of silicone rubber, the same material used to make many skin special effects in movies. The team’s unique design gives it the ability to buckle and wrinkle, just like human skin.

“Our sensor uses weak electric fields to sense objects, even at a distance, much as touchscreens do. But unlike touchscreens, this sensor is supple and can detect forces into and along its surface. This unique combination is key to adoption of the technology for robots that are in contact with people,” explained Dr. John Madden, senior study author and a professor of electrical and computer engineering who leads the Advanced Materials and Process Engineering Laboratory (AMPEL) at UBC.
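
The mutual-capacitance principle described in the figure caption above can be illustrated with the ideal parallel-plate model, C = ε₀ε_r·A/d: pressing reduces the electrode gap d (capacitance rises), while shearing shifts the top electrodes and changes their overlap area A. The dimensions and permittivity below are illustrative assumptions, not the fabricated sensor's parameters.

```python
EPS0 = 8.854e-12          # F/m, vacuum permittivity
EPS_R = 2.8               # assumed relative permittivity of the silicone dielectric

def capacitance(area_mm2, gap_mm):
    """Ideal parallel-plate capacitance in farads."""
    return EPS0 * EPS_R * (area_mm2 * 1e-6) / (gap_mm * 1e-3)

c_rest  = capacitance(area_mm2=4.0, gap_mm=0.50)
c_press = capacitance(area_mm2=4.0, gap_mm=0.35)   # pressure squeezes the gap
c_shear = capacitance(area_mm2=2.8, gap_mm=0.50)   # shear reduces electrode overlap

for label, c in [("rest", c_rest), ("press", c_press), ("shear", c_shear)]:
    print(f"{label}: {c * 1e12:.2f} pF ({100 * (c - c_rest) / c_rest:+.0f}% change)")
```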

The UBC team developed the technology in collaboration with Frontier Robotics, Honda’s research institute. Honda has been innovating in humanoid robotics since the 1980s, and developed the well-known ASIMO robot. It has also developed devices to assist walking, and the emerging Honda Avatar Robot.

“Dr. Madden’s lab has significant expertise in flexible sensors and we’re happy to collaborate with this team in developing tactile sensors that can be applied to robots,” said Mr. Ishizaki Ryusuke, one of the study’s lead authors and chief engineer at Frontier Robotics.

The researchers say the new sensor is simple to fabricate, which makes it easy to scale to cover large surface areas and to manufacture large quantities.

Fabrication of the sensor. (A) Step 1: Mold. Mold filled with Ecoflex™, with the square pillars that will form the top surface of the skin shown at left and the X-shaped supports that form the bottom layer shown at right. (B) Step 2: Pattern electrodes (black) using a carbon black-Ecoflex™ composite by doctor blading through a shadow mask. (C) Spin on an encapsulating layer to prevent external electrical contact. (D) Step 3: Bond the layers using a thin layer of uncured Ecoflex™, which glues the square pillars to the base layer.

Dr. Madden noted that sensors and intelligence are making machines more capable and lifelike, increasingly allowing people to work and play alongside them, but much more can be achieved.

“Human skin has a hundred times more sensing points on a fingertip than our technology does, making it easier to light a match or sew. As sensors continue to evolve to be more skin-like, and can also detect temperature and even damage, there is a need for robots to be smarter about which sensors to pay attention to and how to respond. Developments in sensors and artificial intelligence will need to go hand in hand.”

Sensor response to different stimuli. (A) Non-linear elastic response similar to human skin: E1 is a smaller effective elastic modulus for small strains, while E2 is a larger elastic modulus for larger strains. (B) Response to a hovering finger, (C) a light touch, (D) a press, and (E) a shear. All capacitive axes are in %ΔC.

 

The pigeon as a machine: Complex category structures can be acquired by a simple associative model

by Brandon M. Turner, Edward A. Wasserman in iScience

A new study provides evidence that pigeons tackle some problems just as artificial intelligence would — allowing them to solve difficult tasks that would vex humans.

Previous research had shown that pigeons can learn to solve complex categorization tasks for which human ways of thinking, like selective attention and explicit rule use, would not be useful.

Researchers had theorized that pigeons used a “brute force” method of solving problems that is similar to what is used in AI models, said Brandon Turner, lead author of the new study and professor of psychology at The Ohio State University.

But this study may have proven it: Turner and a colleague tested a simple AI model to see if it could solve the problems in the way they thought pigeons did — and it worked.

“We found really strong evidence that the mechanisms guiding pigeon learning are remarkably similar to the same principles that guide modern machine learning and AI techniques,” Turner said. “Our findings suggest that in the pigeon, nature may have found a way to make an incredibly efficient learner that has no ability to generalize or extrapolate like humans would.”

Turner conducted the study with Edward Wasserman, a professor of psychology at the University of Iowa. Their results were published recently in the journal iScience.

In the study, pigeons were shown a stimulus, which could include lines of various widths and angles, concentric rings and sectioned rings. They had to peck a button on the right or left to indicate which category it belonged to. If they got it correct, they received a food pellet; if they were wrong, they received nothing.

There were four different tasks in the study, some harder than others. Results showed that, through trial and error, the pigeons improved their ability to make the correct choice in one of the easier experiments from about 55% to 95% of the time. Even in a more difficult scenario, their correct responses improved from 55% to 68%.

Researchers believed the pigeons used what is called associative learning, which is linking two phenomena with each other. For example, it is easy to understand the link between “water” and “wet.” People teach their dogs to associate sitting on command with receiving a treat.

But those associations are relatively easy.

“Associative learning is frequently presumed to be far too primitive and rigid to explain complex visual categorization like what we saw the pigeons do,” Turner said.

But that’s exactly what the researchers found.

The researchers’ AI model tackled the same tasks using just the two simple mechanisms that pigeons were presumed to use: associative learning and error correction. And, like the pigeons, the AI model learned to make the right predictions to significantly increase the number of correct answers.
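
The two mechanisms named here can be sketched with a delta-rule (Rescorla-Wagner-style) learner: stimulus features are linked to the two response keys by association weights that are nudged in proportion to the prediction error after each reward. The feature coding, learning rate and synthetic task below are illustrative assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_categories, lr = 20, 2, 0.05

weights = np.zeros((n_features, n_categories))      # stimulus -> response associations

def trial(stimulus, correct_category):
    values = stimulus @ weights                      # associative strength of each key
    choice = int(np.argmax(values + 0.01 * rng.normal(size=n_categories)))
    reward = np.zeros(n_categories)
    reward[correct_category] = 1.0                   # food pellet only for the right key
    weights += lr * np.outer(stimulus, reward - values)   # error-correcting update
    return choice == correct_category

# Synthetic categorization task: the category depends on a linear combination of features.
true_w = rng.normal(size=n_features)
accuracy = []
for _ in range(2000):
    stim = rng.normal(size=n_features)
    cat = int(stim @ true_w > 0)
    accuracy.append(trial(stim, cat))
print("accuracy over last 200 trials:", round(float(np.mean(accuracy[-200:])), 2))
```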

For humans, the challenge in tasks like those given to the pigeons is that they try to come up with a rule or rules that could make the task easier.

“But in this case, there were no rules that could help make this any easier. That really frustrates humans and they often give up on tasks like this,” he said. “Pigeons don’t try to make rules. They just use this brute force way of trial and error and associative learning and in some specific types of tasks that helps them perform better than humans.”

What’s interesting, though, is that pigeons use this method of learning that is very similar to AI designed by humans, Turner said.

“We celebrate how smart we are that we designed artificial intelligence, at the same time we disparage pigeons as dim-witted animals,” he said. “But the learning principles that guide the behaviors of these AI machines are pretty similar to what pigeons use.”

 

Speaking fast and slow: How speech rate of digital assistants affects likelihood to use

by Brett Christenson, Christine Ringler, Nancy J. Sirianni in Journal of Business Research

Voice speed and interaction style may determine whether a user sees a digital assistant like Alexa or Siri as a helpful partner or something to control, according to a team led by Penn State researchers. The findings reveal insights into the parasocial, or one-sided, relationships that people can form with digital assistants, according to the researchers.

“We endow these digital assistants with personalities and human characteristics, and it impacts how we interact with the devices,” said Brett Christenson, assistant clinical professor of marketing at Penn State and first author of the study. “If you could design the perfect voice for every consumer, it could be a very useful tool.”

The researchers found that a digital assistant’s moderate talking speed, compared to faster and slower speeds, increased the likelihood that a person would use the assistant. In addition, conversation-like interactions, rather than monologues, mitigated the negative effects of faster and slower voice speeds and increased user trust in the digital assistant, according to the researchers.

“As people adopt devices that can speak to them, having a consistent, branded voice can be used as a strategic competitive tool,” Christenson said. “What this paper shows is that when you’re designing the voice of a digital assistant, not all voices are equal in terms of their impact on the customer.”

Christenson and his colleagues conducted three experiments to measure how changing the voice speed and interaction style of a digital assistant affected a user’s likelihood to use and trust the device. In the first study, they asked 753 participants to use a digital assistant to help them create a personal budget. The digital assistant recited a monological, or one-way, script at either a slow, moderate or fast pace.

The researchers then asked the participants how likely they would be to use the digital assistant to create a personal budget, measuring responses on a scale from one (not at all likely) to seven (very likely). They found that participants who heard the moderate voice speed were more likely to use the digital assistant than those who heard the slow or fast voices.

In the second study, the researchers asked 266 participants to use a digital assistant to create a personal health plan. In addition to hearing a slow, moderate or fast voice, participants either heard a monological script or were encouraged to interact with the digital assistant in a back-and-forth dialogue. The researchers then asked participants how likely they would be to use the digital assistant for creating a health plan and other tasks as well as their likelihood to share their contact information with the digital assistant.

They found that participants who heard the monologue at a moderate speed were about 5.5 times more likely to share their contact information than those who heard the fast voice speed and approximately twice as likely as those who heard the slow voice speed. On the other hand, the dialogical interaction style appeared to mitigate the negative effects of faster and slower voice speeds, suggesting that conversation-like interactions may increase user trust in digital assistants that speak faster or slower.

In the third study, 252 participants used an Amazon Echo to find a dessert recipe. The digital assistant spoke at either a slow, moderate or fast pace and in a monological or dialogical style. In addition to asking how likely the participants would be to use the device to order recipe ingredients, the researchers asked the participants if they saw Alexa as more of a servant or partner and more like a robot or a human.

The researchers found that participant likelihood to use the device to order ingredients corresponded to the results of the previous two studies. They also found that regardless of speech rate or interaction style, the participants largely saw Alexa as a robot. Moderate and fast speech rates as well as the dialogical interaction style made participants more likely to see Alexa as a partner compared to the slower speech rate and monological interaction style.

“The reason we conducted three different studies was to establish a status quo and then build upon that by adding one piece at a time and, in doing so, establish replicable results,” Christenson said. “We saw the same result in study one, two and three regarding people’s reactions to a slow versus a fast versus a moderate speaking voice. We found that people really like a moderate rate of speed.”

 

Algorithmic extremism? The securitization of artificial intelligence (AI) and its impact on radicalism, polarization and political violence

by Joe Burton in Technology in Society

Artificial Intelligence (AI) and algorithms can and are being used to radicalize, polarize, and spread racism and political instability, says a Lancaster University academic.

Professor of International Security at Lancaster University Joe Burton argues that AI and algorithms are not just tools deployed by national security agencies to prevent malicious activity online, but can be contributors to polarization, radicalism and political violence — posing a threat to national security.

Further to this, he says, securitization processes (presenting technology as an existential threat) have been instrumental in how AI has been designed and used, and in the harmful outcomes it has generated.

Professor Burton’s article ‘Algorithmic extremism? The securitization of Artificial Intelligence (AI) and its impact on radicalism, polarization and political violence’ is published in Elsevier’s high-impact journal Technology in Society.

“AI is often framed as a tool to be used to counter violent extremism,” says Professor Burton. “Here is the other side of the debate.”

The paper looks at how AI has been securitized throughout its history and in media and popular culture depictions, and explores modern examples of AI having polarizing, radicalizing effects that have contributed to political violence.

The article cites the classic film series The Terminator, which depicted a holocaust committed by a ‘sophisticated and malignant’ artificial intelligence, as having done more than anything else to frame popular awareness of artificial intelligence and the fear that machine consciousness could lead to devastating consequences for humanity, in this case a nuclear war and a deliberate attempt to exterminate a species.

“This lack of trust in machines, the fears associated with them, and their association with biological, nuclear and genetic threats to humankind has contributed to a desire on the part of governments and national security agencies to influence the development of the technology, to mitigate risk and (in some cases) to harness its positive potentiality,” writes Professor Burton.

Sophisticated drones, such as those being used in the war in Ukraine, are, says Professor Burton, now capable of full autonomy, including functions such as target identification and recognition.

And, while there has been a broad and influential campaign and debate, including at the UN, to ban ‘killer robots’ and to keep humans in the loop in life-or-death decision-making, the acceleration and integration of AI into armed drones has, he says, continued apace.

In cyber security — the security of computers and computer networks — AI is being used in a major way, with the most prevalent area being (dis)information and online psychological warfare.

The article examines AI technology itself, arguing that problems exist in the design of AI, the data that it relies on, how it is used, and in its outcomes and impacts.

The paper concludes with a strong message to researchers working in cyber security and International Relations.

“AI is certainly capable of transforming societies in positive ways but also presents risks which need to be better understood and managed,” writes Professor Burton, an expert in cyber conflict and emerging technologies and who is part of the University’s Security and Protection Science initiative.

“Understanding the divisive effects of the technology at all stages of its development and use is clearly vital. Scholars working in cyber security and International Relations have an opportunity to build these factors into the emerging AI research agenda and avoid treating AI as a politically neutral technology. In other words, the security of AI systems, and how they are used in international, geopolitical struggles, should not override concerns about their social effects.”

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum

Tags

Robotics
Robots
shape-changing robot
AI