Sway AI, a company that provides no-code artificial intelligence-powered applications and services, has partnered with Trilogy Networks, a firm that deploys hybrid multi-cloud networks, to promote AI technology for precision farming in the United States.
Together, the companies will join the Rural Cloud Initiative (RCI) to accelerate the digital transformation of rural America. The RCI brings the economic and efficiency benefits of edge computing to rural areas of the country.
The RCI was created by Trilogy Networks in 2020 with the goal of deploying a unified, distributed cloud across 1.5 million square miles of rural America.
The partnership will allow farmers to receive a complete artificial intelligence-powered precision agriculture solution from Sway AI and Trilogy Networks.
“The sheer thought of the different technologies and applications a farmer has to rely on can be overwhelming, creates a barrier to entry, and also discourages sustained use,” said Jitender Arora, co-founder and chief product officer at Sway AI.
He added that, as the RCI’s newest partner, Sway AI would be able to provide farmers with meaningful data and complete insights for precision farming through its unified application.
As technology continues to evolve at an exponential pace, agricultural software and applications are becoming more widely used as they provide impactful and valuable data. The challenge is that there is no all-in-one solution available in the market that caters to all the needs of farmers.
With this new partnership, Sway AI and Trilogy Networks will unify data and improve decision-making across the whole production process to assist farmers.
Nancy Shemwell, COO of Trilogy Networks, said, “AgTech made simple – is the key – Trilogy’s FarmGrid™ is the answer. A digital agriculture platform simplifying connectivity challenges, delivering and supporting AgTech applications on a single platform allows the grower to see and utilize the data being gathered by a variety of IoT devices.”
Depict.ai, a company that provides product recommendation software, has raised $17 million in a Series A funding round led by Tiger Global.
Other investors, including Initialized Capital, EQT Ventures, Y Combinator, and a group of angels, also participated in the round.
According to the company, it plans to use the freshly raised funds to further refine its recommendation engine, grow its workforce, and expand into additional global markets, including the United States and Europe.
Depict.ai’s one-of-a-kind solution can be used by any eCommerce company to offer customers Amazon-class recommendation features, enabling an enhanced shopping experience.
To be precise, Depict.ai’s technology is a plug-and-play solution that understands all of a retailer’s products.
“Depict.ai’s AI-based product recommendation platform is completely novel because it does not require historical sales data and enables online retailers of any size to deliver high-quality recommendations, a key driver of increased revenues,” said John Curtius, partner at Tiger Global.
He also mentioned that they are delighted to work with Depict.ai and its team as the company continues to expand into new markets.
The company’s system uses deep learning to analyze product images and metadata, understand a retailer’s offering, and provide a highly accurate recommendation service. An added advantage of Depict.ai’s solution is that it can offer recommendations without any previous sales data being entered.
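To make that idea concrete, here is a minimal sketch of a content-based recommender in the same spirit: each product is embedded from its own content, and similar items are ranked by cosine similarity, with no sales history involved. The embed_product helper below is a hypothetical stand-in; a production system like Depict.ai’s would use a deep image-and-metadata encoder instead.

```python
import numpy as np

def embed_product(title: str) -> np.ndarray:
    # Hypothetical stand-in encoder: hashes characters into a unit vector.
    # A real system would embed product images and metadata with a deep model.
    vec = np.zeros(64)
    for i, ch in enumerate(title.lower()):
        vec[(i * 31 + ord(ch)) % 64] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

catalog = ["red running shoes", "blue running shoes", "leather office chair"]
embeddings = np.stack([embed_product(p) for p in catalog])

def recommend(query: str, k: int = 2) -> list:
    q = embed_product(query)
    scores = embeddings @ q              # cosine similarity: vectors are unit-norm
    return [catalog[i] for i in np.argsort(-scores)[:k]]

print(recommend("red trail shoes"))      # nearest products by content alone
```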
Sweden-based technology company Depict.ai was founded by Anton Osika and Oliver Edholm in 2019. To date, the firm has raised around $19.9 million over two funding rounds.
Founder and CEO of Depict.ai, Oliver Edholm, said, “Our ambition is to bring retailers all the AI infrastructure they need for product discovery, so they can focus on delighting their customers with great products instead of worrying about technical complexities.”
He further added that until now, AI benefits had been reserved for eCommerce tech behemoths, but they are about to change that by providing every eCommerce business with the AI technology they need to produce world-class product suggestions.
DeepMind, Google’s artificial intelligence subsidiary, has trained an AI to regulate the superheated plasma within a nuclear fusion reactor, paving the way for endless clean energy to arrive sooner.
DeepMind used its deep learning technologies in partnership with the Swiss Plasma Center (SPC) at Ecole Polytechnique Federale de Lausanne (EPFL) to manipulate superheated plasma inside a magnetic-confinement reactor known as a “variable-configuration tokamak” (TCV). A tokamak is a doughnut-shaped vacuum chamber surrounded by electromagnetic coils that holds a hydrogen plasma hotter than the Sun’s core. However, the plasmas in these devices are profoundly unstable and must be actively controlled to extract energy from the fusion reactions.
3D model of the TCV vacuum vessel. (DeepMind/SPC/EPFL)
The Swiss Plasma Center at EPFL uses the tokamak to explore the best conditions for confining the continually changing plasma. The form and distribution of the plasma in the tokamak can be altered by varying the voltage on the 19 magnets that keep it in place. To guarantee that the plasma never reaches the vessel’s walls, which would result in heat loss and possibly damage, a control system coordinates the tokamak’s several magnetic coils and regulates the voltage on them thousands of times per second. However, testing new plasma configurations by altering the tokamak’s interlinked settings requires a large amount of engineering and design work.
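As a rough illustration of what regulating coil voltages thousands of times per second means, here is a toy proportional feedback loop; every quantity in it (the sensor model, the gain, the shape descriptor) is invented for clarity, and real tokamak control is vastly more sophisticated.

```python
import numpy as np

N_COILS = 19
target_shape = np.zeros(N_COILS)      # desired plasma-shape descriptor (illustrative)
voltages = np.zeros(N_COILS)

def measure_plasma_shape(v: np.ndarray) -> np.ndarray:
    # Hypothetical sensor model: the shape responds to voltage, plus noise.
    return 0.8 * v + np.random.normal(0.0, 0.01, N_COILS)

GAIN = 0.5                            # proportional feedback gain
for step in range(10_000):            # a few seconds of kHz-rate control
    error = target_shape - measure_plasma_shape(voltages)
    voltages += GAIN * error          # nudge the shape back toward the target
```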
According to an article published in the journal Nature, researchers from the two organizations employed a deep reinforcement learning system to control the 19 magnetic coils within the variable-configuration tokamak at the Swiss Plasma Center. The success of this setup will pave the way to shaping the design of larger fusion reactors in the future.
The researchers achieved this by training their AI system in a tokamak simulator, where a neural network called the critic learned through trial and error how to navigate the complexity of magnetically confining plasma. Training began with observing how different settings on each of the 19 coils influenced the shape of the plasma inside the vessel.
Once trained, the AI-based system was able to create and maintain a broad range of plasma shapes and advanced configurations, including one in which two independent plasmas are maintained in the vessel at the same time. Finally, the researchers put their new system through trials on the tokamak to assess how it would operate in real-world conditions. Here, they distilled the critic’s capability into another, smaller and faster neural network called the actor, which runs on the reactor itself.
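For readers curious what such an actor/critic split looks like in code, below is a heavily simplified sketch in PyTorch: a large critic that learns in a stand-in simulator, and a small actor that would run on the reactor. All dimensions, the toy simulator, and the reward are invented for illustration; DeepMind’s actual system is far more involved.

```python
import torch
import torch.nn as nn

STATE_DIM, N_COILS = 92, 19                    # illustrative sizes

critic = nn.Sequential(nn.Linear(STATE_DIM + N_COILS, 256),
                       nn.ReLU(), nn.Linear(256, 1))       # large: simulation-only
actor = nn.Sequential(nn.Linear(STATE_DIM, 64),
                      nn.ReLU(), nn.Linear(64, N_COILS))   # small and fast: deployed

critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)

def simulate(state, voltages):
    # Hypothetical stand-in for the tokamak simulator: rewards keeping the
    # plasma near the target shape (here, simply small coil voltages).
    return -voltages.pow(2).mean()

state = torch.zeros(STATE_DIM)
for step in range(1000):
    # 1) Critic update: regress the value of (state, action) toward the reward.
    with torch.no_grad():
        voltages = actor(state)
        reward = simulate(state, voltages)
    q = critic(torch.cat([state, voltages]))
    critic_loss = (q - reward).pow(2).sum()
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # 2) Actor update: adjust the policy to maximize the critic's score.
    actor_loss = -critic(torch.cat([state, actor(state)])).sum()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```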
By regulating the SPC’s variable-configuration tokamak, the RL system shaped the plasma into a variety of configurations, including one never before seen in the TCV: stabilized ‘droplets’ in which two plasmas coexisted inside the device. Other configurations included a D-shaped cross-section similar to what will be used inside ITER (previously the International Thermonuclear Experimental Reactor), the large-scale experimental tokamak under construction in France, and a snowflake arrangement that might help distribute the reaction’s intense heat more evenly across the vessel.
Visualization of controlled plasma shapes. (DeepMind/SPC/EPFL)
Eventually, the system successfully contained the plasma for roughly 2 seconds, approaching the reactor’s limit: the TCV can sustain a plasma for only 3 seconds in a single experiment before it must cool down for 15 minutes. The record for fusion reactors is 5 seconds, recently set by the Joint European Torus in the United Kingdom.
According to Federico Felici of EPFL, while there are various theoretical ways to contain the plasma using the magnetic coils, scientists have stuck to tried-and-tested strategies. The AI, on the other hand, astounded the team by finding a new way of creating the same plasma structures with the coils. As per Felici, the reinforcement learning system opted to employ the TCV coils in a completely new manner that nonetheless produces a similar magnetic field. It was still producing the plasma as expected, but it was driving the coils in an entirely new way, because it had the freedom to explore the whole operating space.
“So people were looking at these experimental results about how the coil currents evolve and they were pretty surprised,” Felici adds.
Nuclear fusion, the process that powers the stars, involves colliding and fusing atoms of hydrogen, a common element in water. The process releases massive amounts of energy and has been hailed as a potentially unlimited source of clean energy, but it still faces a number of technical hurdles. The sheer gravitational mass of stars is enough to force hydrogen atoms together and overcome their opposing charges; on Earth, magnetic coils are needed to sustain a controlled reaction.
While impressive, DeepMind’s breakthrough is only a first step toward a practical fusion energy source. According to the laboratory, simulating a tokamak takes many hours of computer processing for one second of real time. Furthermore, because the state of a tokamak can change from day to day, the algorithms must be improved both in simulation and on the physical machine.
For the time being, this initiative should open the path for EPFL to pursue future combined research and development possibilities with other firms. “We’re always open to innovative win-win collaborations where we can share ideas and explore new perspectives, thereby speeding the pace of technological development,” says Ambrogio Fasoli, the director of the SPC and a co-author of the study.
Technology giant Amazon has announced that its new solar power plant in South Africa is now operational and delivering energy and opportunities.
The 10-megawatt solar plant located in the Northern Cape province is the largest solar power plant in the country. According to Amazon, the project is planned to produce up to 28,000 megawatt-hours (MWh) of renewable energy each year, enough to power around 8,000 average South African homes for a year.
Apart from providing clean everyday electricity, the power plant will help boost the region’s economy. The solar project is majority-owned by black women and operated by a South African-owned enterprise.
“Energy projects that enable black investment are our surest way to a just transition to renewable energy,” said Meta Mhlarhi, an investor in the solar plant project.
The solar power plant is also a significant step toward the country’s 2030 renewable energy goals. According to Amazon, the plant’s design will prevent an estimated 25,000 tonnes of carbon emissions per year, equivalent to removing about 5,400 cars from South Africa’s roads (roughly 4.6 tonnes of CO2 per car per year).
In addition, the construction of the solar plant created 167 jobs in the local community, and the plant will continue to support employment for maintenance, operations, and security.
Nat Sahlstrom, director of energy at Amazon Web Services, said, “Amazon is committed to working with governments and utility suppliers around the world to help bring more renewable energy projects online.”
He further added that they are honored to collaborate with the Department of Minerals and Energy, the South African National Energy Regulator, and Eskom to develop a new model for renewable energy generation in the country.
At Meta AI’s Inside the Lab event on 23 Feb 2022, Yann LeCun, Meta AI’s chief scientist, proposed that AI’s ability to approach human-like capability comes down to learning an internal model of how the world works. He noted that a teenager can learn to drive in about 20 hours, while an autonomous driving system requires billions of labeled training examples and millions of reinforcement learning trials, and still falls short of a human’s ability to drive. During the event, he proposed a six-module architecture for common sense as a path to autonomous intelligence.
LeCun believes that the next AI revolution will come when AI systems no longer rely on supervised learning. He hypothesizes that humans and nonhuman animals learn about the world through observation and small amounts of interaction, accumulating background knowledge often called common sense. He also said that AI systems would have to learn from the world itself with minimal help from humans, which common sense would make possible.
“Human and nonhuman animals seem able to learn enormous amounts of background knowledge about how the world works through observation and through an incomprehensibly small amount of interactions in a task-independent, unsupervised way,” LeCun says. “It can be hypothesized that this accumulated knowledge may constitute the basis for what is often called common sense.”
LeCun proposed an architecture of six separate, differentiable modules, each of which can easily compute gradient estimates of the objective function with respect to its input and propagate that information to upstream modules. This common-sense architecture could help AI systems achieve autonomous intelligence. The six modules are the configurator, perception, world model, short-term memory, actor, and cost.
The configurator module is for executive control, like executing a given task. It’s also responsible for pre-configuring the perception, world model, cost, and the actor module by modulating the parameters of those modules.
The perception module receives signals from sensors and estimates the current state of the world, but only a small subset of the perceived state of the world is relevant and valuable for a given task.
The world model module has two roles and is the most complex piece of the architecture. The first role is to estimate missing information about the state of the world that perception does not provide. The second role is to predict plausible future states of the world. The world model acts as a simulator for the task at hand, representing multiple possible predictions.
The cost module predicts the agent’s level of discomfort and has two submodules: the intrinsic cost and the critic. The former is immutable and computes immediate discomforts such as damage to the agent and violations of hard-coded behavioral constraints. The latter is a trainable module that predicts future values of the intrinsic cost.
The actor module computes proposals for action sequences. “The actor can find an optimal action sequence that minimizes the estimated future cost and output the first action in the optimal sequence, in a fashion similar to classical optimal control,” LeCun says.
The short-term memory module keeps track of the current and predicted world state and associated costs.
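As a schematic of how these six modules might pass information in a single perceive-imagine-act cycle, consider the toy sketch below. Every shape and function body is an illustrative placeholder, not Meta AI’s implementation.

```python
import numpy as np

class Agent:
    def __init__(self):
        self.memory = []                       # short-term memory: (state, cost) pairs

    def perceive(self, sensors: np.ndarray) -> np.ndarray:
        return sensors                          # estimate of the current world state

    def world_model(self, state: np.ndarray, action: np.ndarray) -> np.ndarray:
        return state + action                   # predict a plausible next state

    def cost(self, state: np.ndarray) -> float:
        intrinsic = float(np.sum(state ** 2))   # immutable discomfort measure
        critic = 0.9 * intrinsic                # trainable estimate of future cost
        return intrinsic + critic

    def act(self, state: np.ndarray) -> np.ndarray:
        # Actor: choose the candidate action whose *predicted* outcome is cheapest,
        # using the world model as a simulator (cf. classical optimal control).
        candidates = [np.full_like(state, a) for a in (-0.1, 0.0, 0.1)]
        best = min(candidates, key=lambda a: self.cost(self.world_model(state, a)))
        self.memory.append((state, self.cost(state)))
        return best

agent = Agent()                                 # a configurator would tune these
state = agent.perceive(np.array([1.0, -2.0]))   # modules for the task at hand
print(agent.act(state))
```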
The center of this architecture is the predictive world model. Since the real world is not entirely predictable, it is critical to represent it with multiple plausible predictions. The challenge is to design a model that can learn abstract representations of the world, ignore irrelevant details, and make plausible predictions.
Meta AI has introduced JEPA (Joint Embedding Predictive Architecture), which can capture dependencies between two inputs. JEPA produces informative abstract representations while discarding irrelevant details when making predictions. The idea is that JEPA will be able to learn how the world works just as a newborn does, through observation.
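A bare-bones sketch of the joint-embedding predictive idea follows: encode two related inputs, predict one embedding from the other, and measure the error in embedding space rather than in pixel space. The sizes and data are illustrative, and a real JEPA needs extra machinery (e.g., regularization or asymmetric updates) to avoid representational collapse.

```python
import torch
import torch.nn as nn

enc_x = nn.Linear(128, 32)            # encoder for the observed input x
enc_y = nn.Linear(128, 32)            # encoder for the target input y
predictor = nn.Linear(32, 32)         # predicts y's embedding from x's

opt = torch.optim.Adam([*enc_x.parameters(), *predictor.parameters()], lr=1e-3)

x = torch.randn(16, 128)              # e.g., one view, or an earlier frame
y = x + 0.1 * torch.randn(16, 128)    # a related view, or a later frame

for step in range(100):
    target = enc_y(y).detach()        # stop-gradient on the target embedding
    loss = (predictor(enc_x(x)) - target).pow(2).mean()  # error in latent space
    opt.zero_grad(); loss.backward(); opt.step()
```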
Meta, formerly known as Facebook, has announced its plan to build a digital voice assistant to support the growth of, and demand for, the metaverse.
Meta claims that the digital voice assistant will allow users to interact with physical gadgets such as the company’s Portal video-calling device and augmented-reality glasses without using their hands.
The announcement was made by the CEO of Meta, Mark Zuckerberg, during a recently held live event. A Meta representative mentioned that this is the first time Meta has dedicated an entire event to presenting its AI developments.
Meta wants to develop a robust digital assistant that will be able to detect contextual cues in conversations, as well as other signals from our physical bodies, such as facial expressions, hand gestures, and more.
According to Zuckerberg, in order to assist consumers in navigating this new online world, digital assistants will need to “understand how people do.”
Zuckerberg said, “When we have glasses on our faces, that will be the first time an AI system will be able to really see the world from our perspective – see what we see, hear what we hear, and more.”
Apart from a digital voice assistant, Meta is also building a new universal language translator for the metaverse to streamline user interactions in the virtual world.
The AI-powered translation system would not only be able to provide instantaneous translation of widely spoken languages but also dialects that do not have a standardized writing format.
As of now, Meta has not decided on a name for its new digital assistant, but it calls the development program ‘Project CAIRaoke.’
Thanks to AI technology that can learn and anticipate people’s behavior, people will soon be able to access Zuckerberg’s envisioned metaverse hands-free by wearing a pair of smart glasses.
WellSky, a company that provides software and services for healthcare, has announced plans to acquire TapCloud, a leading online healthcare platform, to further strengthen its patient engagement technology.
The acquisition will allow WellSky to use TapCloud’s expertise in virtual patient engagement technology to provide better services to its customers.
WellSky plans to integrate TapCloud with its in-house healthcare technology solutions to expand its capabilities. More than 5 million caregivers use WellSky’s healthcare solutions each day.
TapCloud’s AI-driven, interoperable platform provides real-time, patient-generated information, allowing providers to implement care interventions focused on minimizing avoidable hospital readmissions and emergency care.
TapCloud’s technology will considerably help WellSky improve the consumer experience and deliver value-based care. Using TapCloud’s patented, EHR-agnostic technology, patients can share their symptoms and other pertinent data with clinicians through virtual visits, secure messaging, and remote symptom screening.
CEO of WellSky, Bill Miller, said, “WellSky is connecting every part of health and community care, and TapCloud represents a significant addition to our suite of solutions. By adding these robust capabilities, WellSky will further extend our position as the leading technology and analytics partner across the continuum.”
He further added that WellSky and TapCloud would work together to help providers make evidence-based decisions using actionable insights.
TapCloud, a United States-based online healthcare platform firm, was founded by Tom Riley in 2013. The company is known for its platform that connects patients and doctors through real-time data, critical care information, and records.
“TapCloud has worked tirelessly to close the communication gap between patients and providers through the use of data and technology. With WellSky, we gain access to a larger network and increased investment, which will broaden our reach and allow even more patients and families to be active participants in their care journeys,” said CEO of TapCloud, Phil Traylor.
He also mentioned that they are in a good position to expand the ways they can help clients succeed, regardless of the EHR platform they use.
Meta, formerly known as Facebook, has announced its new free AI Learning Alliance (AILA) to help diverse talent enter the artificial intelligence industry.
The new initiative from Meta and Georgia Tech will help the artificial intelligence industry grow by bringing in new talent to encourage and support innovation.
The company is collaborating with a group of colleges to train more people from underrepresented groups in artificial intelligence and to make free online education available.
The program is a semester-long deep learning course designed to teach learners the principles of neural networks and applications such as computer vision and language understanding.
The curriculum covers most of the critical areas of artificial intelligence that the global industry currently demands. Meta had earlier collaborated with Georgia Tech to build a deep learning course curriculum that has been taken online by 2,400 students since 2020.
This new AILA initiative is the company’s next step in educating students in the field of artificial intelligence. Chair of Computing at Georgia Tech, Charles Isbell, said, “By moving AI instruction online, we can reach more people from a wider range of backgrounds than ever before. This is not only a great opportunity for learners, but also for the field as a whole, which needs a diverse set of voices if it is to responsibly serve a diverse set of communities.”
Meta aims to teach millions of people with this new initiative by making the AILA Education Hub available to everyone, including educators, students, researchers, and hobbyists alike, through Meta’s online education platform named Meta Blueprint.
Meta has tied up with numerous colleges and universities, such as Georgia Tech, Florida Agricultural and Mechanical University, Morgan State University, Florida International University, the University of California, Irvine, and many more, to offer study material in its AILA program.
Additionally, Meta has collaborated with Georgia Tech’s Dr. Kira to design a series of webinars that will be available on the AILA Education Hub and will assist professors in teaching the course content.
Interested candidates can register for the program on Meta’s official website.
CEO of Meta, Mark Zuckerberg, recently announced that the company plans to build a universal language translator for the metaverse.
The new universal translator will play a vital role in streamlining user interactions in the virtual world of the metaverse. Zuckerberg unveiled the plans for the translator during a live virtual event hosted on 23rd February 2022.
He claimed that the technology would use artificial intelligence solutions to deliver language translations for metaverse users. Meta believes that when people start to get virtually connected with individuals from different parts of the world, it will be crucial to deploy an effective and accurate translator for the users to interact with each other seamlessly.
Mark Zuckerberg said, “This will enable a vast scale of predictions, decisions, and generation as well as whole new architectures, training methods, and algorithms that can learn from a vast and diverse range of different inputs.”
He further added that the primary objective is to create a universal model that can combine knowledge from various modalities by collecting data through rich sensors.
Additionally, he mentioned that the system would not only be able to provide instantaneous translation of widely spoken languages but also dialects that do not have a standardized writing format.
Facebook has long tried to develop technologies that help connect people from different parts of the world through the internet, and the company plans to extend this approach to its metaverse too.
“The ability to communicate with anyone in any language — that’s a superpower people have dreamed of forever – and AI is going to deliver that within our lifetimes,” said Zuckerberg.
However, one of the major challenges Meta faces right now is the unavailability of quality data to train its algorithms to offer instantaneous translation across many languages.
To date, most machine translation systems have been developed for only a handful of languages, which causes data scarcity for Meta and poses a challenge to developing a universal language translator.
Staqu Technologies has developed a one-of-a-kind artificial intelligence-powered surveillance system that uses CCTV cameras and microphones to accurately and effectively spot crimes, and even to detect gunshots by sound.
The audio capability integrated into the system, named Jarvis, makes it a highly capable technology that government agencies can use to make Indian roads more secure for pedestrians and travelers.
The company has submitted a bid for audio and video monitoring as part of the Lucknow Smart City project’s effort to boost security in the city.
People familiar with the matter say Staqu is likely to win the tender, as the company’s previous generation of Jarvis technology has already been deployed at various locations by the Uttar Pradesh Police and other state police forces.
CEO and founder of Staqu Technologies, Atul Rai, said, “We have used audio analytics to detect incidents such as prison fights in Uttar Pradesh on a pilot basis. Our target is to implement it in smart cities.”
The leap in capabilities significantly increases Jarvis’s effectiveness, as authorities can now also analyze audio to act against criminals more quickly. Jarvis uses convolutional neural networks to identify various kinds of sounds, such as gunshots, human screams, and more.
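For illustration only, a minimal convolutional sound classifier of the kind described might look like the PyTorch sketch below; the label set, input size, and architecture are hypothetical, not Staqu’s actual model.

```python
import torch
import torch.nn as nn

CLASSES = ["gunshot", "scream", "background"]    # hypothetical label set

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # input: 1 x 64 x 64 spectrogram
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, len(CLASSES)),       # logits over sound classes
)

spectrogram = torch.randn(1, 1, 64, 64)          # stand-in for a real mel-spectrogram
probs = model(spectrogram).softmax(dim=-1)
print(CLASSES[int(probs.argmax())])              # most likely sound class
```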
Additionally, organizations in retail and manufacturing are also using the audio analytics tool to detect distress sounds. The previous version of Jarvis incorporated technology like closed-circuit television (CCTV) that captures video footage, which then gets analyzed by an artificial intelligence-powered facial recognition system to spot criminal activities within its range.
Gurgaon-based artificial intelligence startup Staqu Technologies was founded by Abhishek Sharma, Anurag Saini, Atul Rai, Chetan Rexwal, and Pankaj Sharma in 2015. During its seed funding round, the company received funding from investors like Ajay Gupta and Neeraj Sangal. Staqu specializes in providing solutions for challenges involving analyzing images and extracting valuable information from those images.