Microsoft Defender is an anti-malware component that comes with Windows PCs. With the new Threat Intelligence feature, Microsoft will utilize RiskIQ’s tech for scanning the internet and providing data to Defender’s real-time surveillance. Furthermore, RiskIQ’s data will enrich Defender’s existing dataset and provide security teams with a view of the entire attack chain.
A Microsoft executive, Vasu Jakkal, said, “Our mission is to build a safer world for all — and threat intelligence is [at] the heart of it.” Combining the services, users also get a library of raw threat intelligence and analysis from experts.
Meanwhile, the External Attack Surface Management feature will aid the security teams in understanding how an attacker views the network. It provides a way to identify all potential resources of attackers and find the unmanaged ones. Most companies that begin using a service like this are shocked by the number of internet-facing unmanaged assets they discover.
Jakkal added, “With these new tools, Microsoft is giving security teams more data to work with to protect their networks and other assets.”
A computer engineer named James Howells threw away a hard drive containing over 8,000 bitcoins (£150 million) nearly 10 years ago. Per The Guardian, Howells plans to retrieve the hard drive by using artificial intelligence to operate a mechanical arm that sorts through the trash. He has not given up hope and intends to pull off the job with AI and robot dogs.
Howells’ bitcoin trash story became quite popular in 2013, when he mistakenly put the wrong hard drive in the trash, losing access to his bitcoin stash, currently worth around £150 million (US$183.1 million). He now wants to convince the council of Newport in south Wales to grant him permission to excavate the landfill.
Robot dogs would be used as site security by the ‘bitcoin garbage guy,’ as some people may recall him, to ensure that no one else tries to locate and take the hard drive. Howells also plans to employ environmental and data recovery experts on his project, which would cost about US$12.2M and span nine to twelve months, by his own estimates.
Howells said, “One of the things we’d like to do on the actual landfill site, once we’ve cleaned it up and recovered that land, is put a power generation facility, maybe a couple of wind turbines.” He wants to create a community-owned mining facility to create bitcoin for Newport residents.
The council of Newport refused even to schedule a meeting with him to hear about his ideas. The environment is a crucial factor behind the city’s reluctance to consider his proposals.
AI researcher Louis Bouchard, in collaboration with PetaPixel, has released a free tool to restore old, deteriorated, low-resolution pictures into slightly better ones. The free AI tool GFP-GAN (generative facial prior-generative adversarial network) merges information from two AI models to fill in missing details and reconstruct the image while maintaining high quality.
Several AI technologies can create new images from inputs, but not many can fix an old picture. Conventional methods simply fine-tune existing AI models by gauging image differences. GFP-GAN takes a new approach, using a pre-trained version of NVIDIA’s StyleGAN2 model to inform its own model at several stages of image generation.
GANs are algorithms that pit two neural networks against each other: a generative model that produces new examples and an adversarial (discriminator) model that classifies them as ‘real’ or ‘fake,’ comparing one against the other.
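To make the two-network setup concrete, here is a minimal, self-contained sketch of the opposing objectives. This is not the GFP-GAN implementation; the one-parameter models and all names here are illustrative assumptions:

```python
import math

def discriminator(x, w=1.0, b=0.0):
    """Logistic classifier scoring how 'real' a sample looks (0..1)."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def generator(z, scale=1.0, shift=0.0):
    """Maps random noise z to a synthetic sample."""
    return scale * z + shift

def discriminator_loss(real, fake, w=1.0, b=0.0):
    """The discriminator wants D(real) -> 1 and D(fake) -> 0."""
    return -(math.log(discriminator(real, w, b))
             + math.log(1.0 - discriminator(fake, w, b)))

def generator_loss(fake, w=1.0, b=0.0):
    """The generator wants the discriminator fooled: D(fake) -> 1."""
    return -math.log(discriminator(fake, w, b))

# The two objectives pull against each other: training alternates between
# lowering discriminator_loss and lowering generator_loss.
```

A confidently separated pair (real=2, fake=-2) gives a lower discriminator loss than an undecided pair (real=0, fake=0), which is what training pushes toward.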
The restored images produced by the AI do not accurately replicate the original image. Instead, all the components that are added to replace evident traces of deterioration and brighten the original image are model predictions that introduce extra pixels.
The creators have provided a free demo for people to use the tool, along with their code to let developers implement the restoration techniques for their projects. However, the project is constrained by AI’s limitations as it guesses the missing content. The researchers believe there might be a “slight change of identity.”
Regardless of the limitations, the AI tool is doing surprisingly well in accuracy and can remove wrinkles, spots, grains, and a few other telltale signs of damage.
With the help of AI, several fields have made amazing progress. Existing AI algorithms have greatly benefited disciplines like data analytics, large language models, and others that use enormous amounts of data to detect patterns, learn rules, and then apply them. The foundational idea behind AI is to replicate the functioning of the human brain using arithmetic and digital representations. In other words, while the human brain relies on spiking signals sent across neuron synapses, AI processes data by carrying out matrix multiplications. In addition, unlike human neurons, AI models require weeks of training, consume huge amounts of power, and run on silicon-based chips that are currently hit by resource scarcity in the semiconductor industry. Therefore, scientists have turned to neuromorphic computing to solve these gnawing concerns.
In essence, neuromorphic computing is the revolutionary concept of designing computer systems that resemble the brain’s neurological structure. A neuromorphic chip like Intel’s Loihi 2 attempts to simulate the real-time, stimulus-based learning that occurs in brains. Since existing AI models are bound by computational, literal interpretations of events, it is crucial that the next generation of AI be able to respond quickly to unusual circumstances, as the human brain would. Because of how unpredictable and dynamic the world is, AI must be able to deal with any peculiar circumstances that arise in real time.
The emergence of neuromorphic computing has prompted major endeavors to design new, nontraditional computational systems based on recurrent neural networks, which are critical to enabling a wide range of modern technological applications such as pattern recognition and autonomous driving.
Most of the existing chip architectures adopt the von Neumann architecture, which means the network uses separate memory and processing units. Data is retrieved from memory, moved to the processing unit, processed there, and then returned to memory. This constant back and forth drains both time and energy, and the bottleneck it produces is further accentuated when processing massive datasets.
Despite using less than 20 watts of electricity, the human brain still beats supercomputers, proving its superior energy efficiency. By creating artificial neural systems with “neurons” (the actual nodes that process information) and “synapses” (the connections between those nodes), neuromorphic computing can replicate the function and efficiency of the brain. This artificial counterpart of our biological network of neurons and synapses is called a spiking neural network (SNN): spiking neurons are arranged in layers, and each can fire independently and interact with the others. Artificial neurons thus respond to inputs by initiating a series of changes, and researchers can alter the amount of electricity that passes between the nodes to simulate the various intensities of brain impulses.
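The spiking behavior described above can be illustrated with a minimal leaky integrate-and-fire neuron sketch. The threshold and leak values below are arbitrary assumptions for demonstration, not taken from any particular chip:

```python
def simulate_lif(currents, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential integrates
    incoming current, decays (leaks) each step, and the neuron emits a
    spike (1) and resets when the potential crosses the threshold."""
    v = 0.0
    spikes = []
    for i in currents:
        v = leak * v + i          # leaky integration (a fading memory trace)
        if v >= threshold:
            spikes.append(1)      # fire
            v = 0.0               # reset after the spike
        else:
            spikes.append(0)      # stay silent
    return spikes
```

A steady sub-threshold input accumulates until the neuron fires: `simulate_lif([0.5, 0.5, 0.5])` yields `[0, 0, 1]`. Unlike a matrix multiplication, the output depends on the timing and history of inputs, not just their instantaneous values.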
However, there is a major setback: spiking neural networks are limited in their ability to freely select the resolution of the data they must keep, or the times at which they access it during calculations. They can be thought of as nonlinear filters that process data as it passes through them in real time. To do real-time processing on a sensory input stream, these networks need to keep a short-term memory record of their most recent inputs. Without learning, the lifetimes of these memory traces are fundamentally constrained by the network size and by the longest time scales the network’s components can handle. Developing volatile memory technologies that use fading memory traces is therefore the need of the hour.
Since liquid environments are also necessary for biological neurons, a convergence might be reached by applying nanoscale nonlinear fluid dynamics to neuromorphic computing.
University of California San Diego researchers have created a unique paradigm in which liquids that ordinarily do not interact significantly with light at the micro- or nanoscale support a sizable nonlinear response to optical fields. According to research published in Advanced Photonics, a nanoscale gold patch serving as an optical heater causes variations in the thickness of a liquid layer covering the waveguide, providing a significant light-liquid interaction effect.
(Image caption) Simulation result of light affecting liquid geometry, which in turn affects the reflection and transmission properties of the optical mode, thus constituting a two-way light–liquid interaction mechanism. The degree of deformation serves as an optical memory, allowing the system to store the power magnitude of the previous optical pulse and to use fluid dynamics to affect the subsequent optical pulse at the same actuation region, thus constituting an architecture where memory is part of the computation process. (Image: Gao et al.)
The researchers explain that here the liquid film serves as an optical memory. It operates as follows: there is a mutual interaction between the optical mode and the liquid film, in which light in the waveguide modifies the geometry of the liquid surface, and the shape of the liquid surface in turn affects the characteristics of the waveguide’s optical mode. Notably, when the liquid geometry changes, the optical mode’s characteristics respond nonlinearly. After the optical pulse ends, the power of the preceding pulse can be determined from how much the liquid layer has deformed. As mentioned earlier, in contrast to conventional computing methods, the nonlinear response and the memory are located in the same spatial region, which raises the possibility of a compact (beyond von Neumann) design in which the memory and the computational unit are housed in the same area.
The researchers show how memory and nonlinearity can be combined to create “reservoir computing,” which can carry out digital and analog tasks like handwritten image recognition and nonlinear logic gates.
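Reservoir computing itself is easy to sketch in software: a fixed, random, recurrent network (the “reservoir”) is driven by the input stream, and only a simple linear readout is trained on its states. The sketch below (sizes, leak rate, and weight scales are arbitrary assumptions) shows the nonlinear, fading-memory state update that the liquid film implements physically; the trainable readout is omitted:

```python
import math
import random

def reservoir_states(inputs, size=20, leak=0.5, seed=0):
    """Drive a fixed random recurrent reservoir with a 1-D input stream
    and return the state after each step. The states combine nonlinearity
    (tanh) with fading memory (leaky update), which a trained linear
    readout later exploits for tasks like classification or logic gates."""
    rng = random.Random(seed)
    w_in = [rng.uniform(-1.0, 1.0) for _ in range(size)]
    w_rec = [[rng.uniform(-0.3, 0.3) for _ in range(size)]
             for _ in range(size)]
    x = [0.0] * size
    history = []
    for u in inputs:
        # recurrent drive plus input drive for each reservoir node
        pre = [sum(w_rec[i][j] * x[j] for j in range(size)) + w_in[i] * u
               for i in range(size)]
        # leaky nonlinear update: old state fades, new activation mixes in
        x = [(1.0 - leak) * x[i] + leak * math.tanh(pre[i])
             for i in range(size)]
        history.append(list(x))
    return history
```

Feeding a single pulse followed by zeros shows the fading memory at work: the state is still nonzero several steps after the input has gone quiet, just as the deformed liquid layer retains the previous pulse’s power.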
Their model also makes use of the nonlocality property of liquids. Researchers can now forecast compute enhancements that are not conceivable on platforms made of solid-state materials with a finite nonlocal spatial scale. Despite nonlocality, the model falls short of contemporary solid-state optics-based reservoir computing systems. Nonetheless, the research provides a clear road map for future experimental research in neuromorphic computing seeking to test the predicted effects and investigate complex coupling mechanisms of diverse physical processes in a liquid environment for computation.
Using multiphysics simulations, the researchers predicted various unique nonlinear and nonlocal optical phenomena by investigating the interaction between light, fluid dynamics, heat transfer, and surface tension effects. They take it one step further by showing how they may be applied to construct adaptable, unconventional computational systems. Researchers propose enhancements to state-of-the-art liquid-assisted computation systems by around five orders of magnitude in space and at least two orders of magnitude in speed by using a mature silicon photonics platform.
A YouTube presentation of this research is also available.
A team of researchers from the University of Illinois Urbana-Champaign has created a novel technique for teaching numerous agents, such as robots and drones, to cooperate using artificial intelligence. The approach uses multi-agent reinforcement learning (MARL), a branch of artificial intelligence that is becoming more and more prominent because it enables high degrees of coordination and collaboration across AI agents. MARL examines how multiple agents interact with one another and with a shared environment, allowing us to observe how they cooperate, coordinate, compete, or collectively learn to complete an assigned task.
An illustration of multi-agent reinforcement learning can be a swarm of high-rise fire fighting drones attempting to stop a wildfire. To prevent the wildfire from causing more environmental damage, the drones must work together because each drone can only view a limited portion of its surroundings.
According to Huy Tran, an Illinois aerospace engineer, the study aimed to enable decentralized agent communication. The team also concentrated on circumstances where it is not immediately clear what each agent’s responsibilities or tasks should be.
Tran said this research problem is far more complicated and demanding because it can be unclear what one agent ought to do in contrast to another. Individual agents can cooperate and execute tasks when communication channels are available, but what if they lack the necessary hardware, or the signals are blocked, rendering communication impossible? Tran believes that how agents gradually learn to work together to complete a goal makes this an intriguing research topic.
To resolve this issue, Tran and his colleagues used machine learning to design a utility function that tells each agent when it is acting in a way that benefits the team.
The team created a machine learning method that can recognize when a single agent contributes to the overall team goal. This matters because when a swarm of robot agents accomplishes a common or collective goal, it can be challenging to know which agent contributed the most. As Tran puts it, if you compare it to sports, one soccer player may score, but we also want to know about the teammates’ contributions, such as assists.
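One classic way to build such a per-agent utility (not necessarily the exact method used in the paper) is a counterfactual “difference reward”: compare the team’s reward with what it would have been without the agent. A tiny coverage example, where the swarm’s reward is the number of distinct grid cells it covers, all names hypothetical:

```python
def coverage_reward(positions):
    """Team reward: number of distinct grid cells covered by the swarm."""
    return len(set(positions))

def difference_reward(positions, i):
    """Credit agent i by asking how much the team reward drops if agent
    i's contribution is removed (a counterfactual comparison)."""
    without_i = positions[:i] + positions[i + 1:]
    return coverage_reward(positions) - coverage_reward(without_i)
```

With positions `[(0, 0), (0, 0), (1, 1)]`, the redundant agent 0 earns credit 0, while agent 2, covering a unique cell, earns 1, echoing the goal-versus-assist distinction above.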
The researchers’ algorithms can also detect when an agent or robot is acting in a way that doesn’t align with or help achieve the goal, for example, when a robot agent simply opts to do something that isn’t helpful to the overall objective.
The research team used simulations of games like Capture the Flag and StarCraft, a well-known computer game, to evaluate their algorithms. The team was ecstatic to learn that their strategy worked well in StarCraft, which Tran noted was slightly unexpected.
According to Tran, this specific multi-agent reinforcement learning-based algorithm is relevant to many real-world scenarios, including military surveillance, robot collaboration in a warehouse, traffic signal management, delivery coordination by autonomous vehicles, and grid control.
Tran stated that Seung Hyun Kim developed most of the theory behind the proposal as a mechanical engineering undergraduate student, with Neale Van Stralen, an aerospace student, assisting with implementation. Their paper titled, “Disentangling Successor Features for Coordination in Multi-agent Reinforcement Learning,” was published in the Proceedings of the 21st International Conference on Autonomous Agents and Multi-agent Systems, which took place in May 2022.
When implemented, reinforcement learning aims to discover an optimum strategy that maximizes the anticipated reward from the surrounding environment. When reinforcement learning is used to control several agents, the term multi-agent reinforcement learning is used. Since each agent attempts to learn its strategy to maximize its reward, MARL is essentially the same as single-agent reinforcement learning. Although it is theoretically feasible to employ a single policy for all agents, doing so would require complicated communication between several agents and a central server, which is difficult in the majority of real-world situations. Instead, decentralized multi-agent reinforcement learning is utilized in reality.
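For the single-agent case, the “learn a strategy that maximizes the anticipated reward” loop can be sketched with tabular Q-learning on a toy corridor. The environment and all hyperparameters here are illustrative assumptions, not the paper’s method:

```python
import random

def train_corridor(n=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a 1-D corridor of n states. The agent starts
    at state 0, actions are 0 = step left and 1 = step right, and reaching
    state n-1 pays a reward of 1 and ends the episode."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n)]   # Q-value table: q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n - 1:
            # epsilon-greedy: mostly exploit, occasionally explore
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else min(n - 1, s + 1)
            r = 1.0 if s2 == n - 1 else 0.0
            # temporal-difference update toward reward + discounted future value
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training, the greedy policy steps right from every non-terminal state; in the multi-agent setting, each agent runs a loop like this over its own policy, which is what makes coordination without a central server hard.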
In a multiple-agent robot system, it is critical to complete path planning while avoiding interference, allocating resources, and exchanging information in a coordinated and effective manner. Most conventional multi-agent coordination algorithms operate in well-known settings, and agent autonomy is constrained by predetermined target positions and priorities for each robot or drone. Approaches that address multi-agent coordination using only visual information are still insufficient. This research study therefore promises new avenues for multi-agent communication using multi-agent reinforcement learning.
Trichy: National Institute of Technology Trichy (NIT-T) is about to introduce a hybrid MTech program in Artificial Intelligence and Data Science this current semester. The curriculum is specially designed for working professionals and can be completed in 2 years.
Students holding a bachelor’s degree such as BTech or BE, or an MSc (or any other equivalent degree in computer science) from a recognized university, are eligible to apply for the course. Classes are scheduled after standard working hours, keeping in view the schedules of working professionals.
G Aghila, director of NIT Trichy, said, “The course is being jointly offered by Computer Applications, Computer Science and Engineering Management Studies departments. There is no entrance exam for this course. Sixty per cent marks in their respective subject are enough.”
NIT-T has also introduced a flexible curriculum and credit transfer from other institutions. Additionally, the course can be pursued in sections with different ‘Entry and Exit Points’: candidates can leave with a certificate and credit points after completing six months and pursue the rest later if they wish.
This makes the hybrid MTech course a very flexible and fruitful option for working professionals to enhance their knowledge and qualifications.
NFTs and blockchain technology will no longer be permitted to integrate with Minecraft, according to a recent announcement from Microsoft-owned Mojang. One of the most well-known video games in the world, Minecraft has seen steady growth since its initial 2009 release, thanks to publisher Microsoft.
The notification of the ban not only ended several Minecraft-based NFT initiatives but also delivered a strong argument against NFTs. Mojang also published a post detailing planned modifications to its NFT usage policies: blockchain technology will soon be prohibited on independent game servers managed by players and creators, and Minecraft will also forbid the use of its visuals or its API in NFT ventures.
Although the Minecraft team does not presently plan to include blockchain technology, Mojang stated that it might do so in the future.
According to Mojang, the speculative pricing and investment mindset around NFTs divert attention from actually playing the game and promote profiteering, which is contrary to the players’ long-term happiness and success.
Some autonomous Minecraft servers, according to the company, enable the usage of NFTs that represent in-game objects or give NFT incentives to gamers. Additionally, there are initiatives that have transformed Minecraft resources into NFT collectibles, such as the Polygon-based NFT Worlds, which has invested months in creating a whole crypto-economy by selling virtual Minecraft land plots.
Sources claim that after the news, NFT Worlds suffered around a 70% drop in the price of its NFTs. Nonetheless, the project’s developers insist that they won’t be leaving the community.
The team behind NFT Worlds has now revealed plans to develop a new game based on many of the fundamental elements of Minecraft but entirely free of the restrictions Microsoft and Mojang impose on the game. According to NFT Worlds, their brand-new Minecraft-inspired game will be recognizable to players of the original but with the modernity and ongoing development it has been missing for years.
As the metaverse and Web3 gaming pick up momentum, numerous other gaming industry titans are planning to adopt NFTs. Square Enix, a Japanese video game developer, has revealed Final Fantasy VII as its first game to use NFTs after selling some of its intellectual property to invest in Web3-based gaming.
While Microsoft wishes to avoid becoming involved in blockchain gaming, its planned acquisition of Activision Blizzard, which would make it the third-largest gaming firm, makes that stance a little unexpected, given that Activision Blizzard’s World of Warcraft served as the inspiration for Vitalik Buterin’s creation of Ethereum.
On the other hand, despite not banning NFT-using games from its marketplace, Fortnite maker Epic Games claims it won’t be implementing them in any of its in-house games.
As NFTs and blockchain promise to break the monopoly of centralized game developing companies, it will be interesting to see how these companies make drawing board changes to remain undefeated in the face of disruption in the gaming industry.
On July 19, at the Moscow Chess Open competition, a robot physically injured a 7-year-old child who attempted a quick move without allowing the machine enough time to complete its own. According to Sergey Smagin, vice president of the Russian Chess Federation, who spoke to the state-run news organization RIA Novosti, the child is alright, but one of his fingers was broken.
The crane-like robot is shown reaching across the board and aggressively grabbing the boy’s finger for a few seconds in a video posted on the Telegram channel Baza before authorities hurried over and rescued the child. According to reports, the child, known as Christopher, is among the top 30 chess players in Moscow under the age of nine. While Christopher’s finger was put in a plaster cast, he was not overly traumatized by the attack. He displayed extraordinary tenacity for someone who had been assaulted by a robot by playing the next day and finishing the competition. He was able to sign paperwork and attend the awards ceremony.
According to reports, Christopher’s parents have been in touch with the district attorney’s office, but the Moscow Chess Federation says it is speaking with the couple and working to resolve the matter (i.e., convincing them not to press charges).
Meanwhile, Smagin believes that Christopher is primarily to be blamed for this rare incident. He said, “There are certain safety rules, and the child apparently violated them. When he made his move, he did not realize he first had to wait.” As per the robot’s designers, it was built with artificial intelligence that allowed it to play three chess games simultaneously. It has been playing chess for around 16 years and has participated in several ceremonial matches without ‘cracking any bones’ or causing serious physical harm to opponents, according to Sergey Lazarev, president of the Moscow Chess Federation.
While this is not the robot uprising that movies like Avengers: Age of Ultron or Terminator prepared us for, it still raises similar questions about our reliance on robots. Will robots’ allegiance lie with humans when they go rogue or acquire emotional intelligence? Is this a teaser for the hostile AI takeover critics have warned us about? Or can it be deemed an episode that reminds engineers to rethink how to create robots less prone to mistakes that might endanger lives? At the same time, Smagin defended the robot’s AI skills, saying that the public has a misconception that technologically advanced AI will wipe out humankind.
Though eliminating humans sounds like a stretch, we also cannot deny accidents in which human lives were lost to robots. For instance, at Volkswagen’s Baunatal production facility in Germany, a stationary robot killed a contractor in 2015 by grabbing and crushing him against a metal plate. In the same year, a worker was stabbed to death by one of the robots at the SKH Metals facility in Manesar, India, where he worked.
Most recently, a tragic Model S collision in Newport Beach that claimed the lives of three people when the car smashed into construction equipment prompted a new Federal investigation into Tesla’s Autopilot technology. Three construction workers were also struck by the EV, and they were sent to the hospital with less severe injuries.
These instances highlight the possibility that, with or without human error, robots, whether or not powered by AI, are capable of violating Asimov’s First Law of Robotics, which demands that no harm come to human lives.
Steelmaker ArcelorMittal has selected Iris.ai to leverage its AI to advance its use of scientific research. The collaboration will focus on automating some of the former’s R&D processes and reducing patent research time by efficiently extracting and transforming experiment data.
ArcelorMittal, a Fortune Global 500 company, is a leading steel firm with a presence in more than 60 countries. It relies on its R&D function to enhance manufacturing, deliver sustainability commitments, and transform into an AI-focused company. The patent team, a part of its R&D, undertakes ongoing competitive research to identify gaps in the market and provide patentable solutions for potential infringements.
Sophie Plaisant, head of Intellectual Property for ArcelorMittal, said, “At ArcelorMittal, we are constantly looking for ways to optimize our R&D processes, and this is what brought us to Iris.ai. Integrating the Extract tool into our process has made the ingestion and processing of external data significantly easier.”
However, searching for and extracting such data consumes a great deal of skilled employees’ time on unskilled manual labor. With Iris.ai, this information can be extracted in a few minutes. Iris.ai’s extraction tool offers 94% precision, estimated via a self-assessment module: the tool can assess its own results, giving researchers a good level of confidence.
Iris.ai’s extraction tool utilizes natural language processing (NLP) and machine learning (ML) to retrieve domain-specific elements from the data. This data can be entered into an Excel sheet, integrated lab tools, databases, etc.
Anita Schjoll Brede, CEO, and co-founder of Iris.ai, said, “Countless hours are being invested in corporate R&D functions around the world manually extracting data from tables. With our Extract tool, it’s possible to automate this manual work to free up time so researchers can focus on carrying out valuable research.”
A product search engine startup, Vetted, announced a US$14M Series A funding for its AI-powered platform for deals and product reviews. The cash infusion comes from a funding round led by Insight Partners and will be utilized in scaling Vetted’s machine learning technologies.
Stuart Kearney, a co-founder of Vetted, describes the platform as a search engine for discovering brands and products based on your requirements. He started the company along with Tim Etler and Tom Raleigh after realizing that online shopping had become an overwhelming customer experience.
Kearney said, “People shouldn’t have to spend hours sifting through indistinguishable products littered across thousands of ad-infested sites loaded with fake reviews and unreliable information.”
This is where Vetted comes in handy. It surfaces reviews for a given product. These reviews are picked from platforms like Reddit, The New York Times Wirecutter, YouTube, and many other reliable sites. The platform also compares product prices across merchants, tracks changes, and alerts when there are sales or discounts. Vetted ranks these products based on 10,000+ factors, including reviewer credibility, brand reliability, past generations, etc.
Vetted is available on the web and as a browser extension. It stands apart from other product comparison tools like Honey by PayPal, Paribus, etc., because of its use of AI to identify products from different categories. Around 30 “product experts” work with the technology to analyze price changes or updates to ensure accurate results.