
Google to open largest office outside of US in Hyderabad


Technology giant Google is set to open its largest office outside the United States in Hyderabad, India. 

The announcement came at the project's commencement ceremony, held in the presence of IT Minister K.T. Rama Rao, IT Principal Secretary Jayesh Ranjan, and Google Country Head and Vice President Sanjay Gupta. 

According to Google, the new facility could accommodate thousands of people over the next few years. 

Read More: HPE unveils Two Revolutionary Solutions to Boost AI and ML adoption

The campus, built on a 7.3-acre plot in Gachibowli, will feature a massive 3.3-million-square-foot structure. Google bought the land back in 2019 and now plans to build its largest office outside the US there. 

“I am pleased that Google is deepening its roots in Hyderabad through this landmark building which incorporates sustainability into its design, keeping in mind Hyderabad’s large and future-focused talent pool,” said IT Minister K.T. Rama Rao. 

Google currently maintains roughly ten offices in Hyderabad, one of its largest footprints in the country. With the new campus, Google aims to double its Hyderabad workforce, which serves as a support center for Google AdWords, Gmail, Google Docs, Google Maps, YouTube, and other products. 

The new development is a step in Google's long-term plans; once launched, the campus is expected to host a highly skilled workforce in a resilient, adaptive workplace. 

Additionally, Google and the State government signed a Memorandum of Understanding (MoU) to strengthen their relations to pursue the Digital Telangana mission. 

As part of the MoU, Google will team up with the Telangana Academy for Skills and Knowledge (TASK) to offer Google Career Certificate Scholarships to individuals interested in fields such as IT, UX design, data analytics, and project management. 

“Our previous MoUs with them have resulted in some great initiatives that have positively affected citizens from all walks of life. This time we are focusing on making a step-change in communities such as youth, women, and students and in citizen services,” added Rama Rao. 


HPE unveils Two Revolutionary Solutions to Boost AI and ML adoption


In an effort to boost enterprise adoption of artificial intelligence, Hewlett Packard Enterprise (HPE) has launched two new AI solutions. The first introduces a decentralized machine learning system that lets remote or edge installations share updates to their models; the second is geared toward helping companies develop and train machine learning models at scale.

The first solution, HPE Swarm Learning, accelerates insights at the edge by sharing and unifying AI model learnings without sacrificing data privacy. These insights range from disease diagnosis to credit card fraud detection. 

HPE Swarm Learning is the first privacy-preserving decentralized machine learning solution for edge or dispersed sites, created by HPE’s R&D unit, Hewlett Packard Labs. Through the HPE Swarm API, the solution provides users with containers that can be effortlessly incorporated into AI models. Users may then share AI model learnings both within their company and with industry peers to enhance training without having to divulge actual data.

The majority of AI model training today takes place in a single location, using centralized, integrated datasets. Because huge amounts of data must be transported back to a single source, this methodology can be inefficient and costly. It can also be inhibited by data privacy and ownership restrictions and by regulations that limit data exchange and mobility, resulting in inaccurate and biased models. In contrast, by training models and using insights at the edge, companies can make faster decisions at the point of impact, resulting in improved experiences and results.

According to HPE, Swarm Learning is the only solution that allows enterprises to leverage dispersed data at its source to develop machine learning models that learn fairly while maintaining data governance and privacy. HPE Swarm Learning uses blockchain technology to securely enroll members, dynamically elect a leader, and combine model parameters, lending robustness and security to the swarm network and ensuring that only the learnings acquired at the edge are shared, not the data itself. In simpler terms, HPE Swarm Learning establishes a peer-to-peer network between nodes and ensures that model parameters can be safely transferred.
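
HPE has not published Swarm Learning's internals or API beyond this description, but the underlying idea of decentralized parameter merging can be sketched in a few lines. The sketch below is a minimal illustration, assuming a simple linear model and a plain averaging merge; the function names and the merge rule are our assumptions, not HPE's interface.

```python
import numpy as np

# Minimal sketch of decentralized parameter merging (illustrative only;
# not HPE's Swarm Learning API). Each node trains on data that never
# leaves it and shares only its model weights.

def local_training_step(weights, private_data, lr=0.01):
    """One local gradient step of linear regression on a node's private data."""
    X, y = private_data
    grad = X.T @ (X @ weights - y) / len(y)  # gradient of mean squared error
    return weights - lr * grad

def merge_parameters(all_weights):
    """The elected leader averages parameters from every node; raw data is never exchanged."""
    return np.mean(all_weights, axis=0)

rng = np.random.default_rng(0)
nodes = [(rng.normal(size=(100, 5)), rng.normal(size=100)) for _ in range(3)]
weights = np.zeros(5)

for _ in range(10):  # one "swarm round" per iteration
    local = [local_training_step(weights, data) for data in nodes]
    weights = merge_parameters(local)  # only learnings are shared, not data
```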

HPE Swarm Learning is available as part of a containerized Swarm Learning Library that runs on Docker or within virtual machines and is hardware agnostic. HPE also mentioned that TigerGraph is already using HPE Swarm Learning in conjunction with its data analytics platform to spot anomalous behavior in credit card transactions.

Hewlett Packard Enterprise also unveiled the HPE Machine Learning Development System, an end-to-end solution that combines a machine learning software platform, compute, accelerators, and networking to create and train more accurate AI models faster and at larger scale. The new system builds on HPE's acquisition of Determined AI, combining Determined's comprehensive ML platform, now formally rebranded as the HPE Machine Learning Development Environment, with HPE's AI and HPC offerings. Apart from the HPE Machine Learning Development Environment training platform, its software and services stack includes container management (Docker), cluster management (HPE Cluster Manager), and Red Hat Enterprise Linux.

According to the company, customers can cut the traditional time-to-value for developing and training machine learning models from weeks to days. Provisioning infrastructure to support model creation and training at scale has traditionally been a lengthy, multistep procedure, entailing the acquisition, installation, and administration of a highly parallel software ecosystem and infrastructure.

The HPE Machine Learning Development System helps businesses avoid the high costs and complexity of implementing AI infrastructure by combining software, specialized compute such as accelerators, networking, and services in a single solution, allowing businesses to quickly build and train optimized ML models at scale. In other words, the solution makes it easier for businesses to construct and train machine learning models at scale, letting them reach value faster. With distributed training, automated hyperparameter optimization, and neural architecture search, it can expand AI model training with minimal code rewrites or infrastructure revisions and help increase model accuracy.
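
The Machine Learning Development Environment's actual interface isn't shown in the announcement, so the snippet below only illustrates the general concept of automated hyperparameter optimization with a generic random search; `train_and_evaluate` is a hypothetical stand-in for a real training run.

```python
import random

# Generic random-search sketch of automated hyperparameter optimization.
# This illustrates the concept only; it is not HPE's or Determined AI's API.
search_space = {
    "learning_rate": lambda: 10 ** random.uniform(-5, -2),
    "batch_size":    lambda: random.choice([32, 64, 128, 256]),
    "dropout":       lambda: random.uniform(0.0, 0.5),
}

def train_and_evaluate(config):
    """Hypothetical stand-in: run a full training job and return a validation score."""
    return random.random()

best_score, best_config = float("-inf"), None
for _ in range(20):  # 20 trials
    config = {name: sample() for name, sample in search_space.items()}
    score = train_and_evaluate(config)
    if score > best_score:
        best_score, best_config = score, config

print("best configuration:", best_config)
```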

The core architecture is built on HPE Apollo 6500 Gen10 server nodes with eight Nvidia A100 80GB GPUs and Nvidia Quantum InfiniBand networking. Up to 4TB of RAM and 30TB of NVMe local scratch storage are available on Apollo nodes, with HPE Parallel File System Storage as an option. To manage the system, there are additional ProLiant DL325 servers that operate as service nodes and are connected to the enterprise network through an Aruba CX 6300M switch.

The platform combines optimized compute, accelerators, and interconnect, all critical performance drivers when scaling models across a range of workloads, from a modest configuration of 32 GPUs to a larger one of 256 GPUs. In a 32-GPU configuration, the HPE Machine Learning Development System delivers around 90% scaling efficiency for workloads like natural language processing (NLP) and computer vision. In addition, internal testing shows that the system with 32 GPUs is up to 5.7 times faster on an NLP task than a competing offering with 32 similar GPUs but a suboptimal interconnect.
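
For context, scaling efficiency compares the measured speedup against the ideal linear speedup when GPUs are added. A quick back-of-the-envelope calculation, using hypothetical throughput figures rather than HPE's benchmark data:

```python
# Scaling efficiency = measured speedup / ideal speedup.
# Throughput numbers below are hypothetical, not HPE benchmark figures.
base_gpus, scaled_gpus = 8, 32
base_throughput = 1_000    # samples/sec on 8 GPUs
scaled_throughput = 3_600  # samples/sec on 32 GPUs

ideal_speedup = scaled_gpus / base_gpus                  # 4.0x
measured_speedup = scaled_throughput / base_throughput   # 3.6x
print(f"Scaling efficiency: {measured_speedup / ideal_speedup:.0%}")  # 90%
```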

Read More: Microsoft and HPE test AI on International Space Station

The HPE Machine Learning Development System is a fully integrated solution that includes pre-configured and installed AI infrastructure for rapid model development and training. HPE Pointnext Services will provide on-site installation and software configuration as part of the service, allowing users to instantly build and train machine learning models to generate faster, more accurate insights from their data.

HPE also revealed that Aleph Alpha, a German AI company, is using the HPE Machine Learning Development System to train its multimodal AI, which incorporates NLP and computer vision. By combining image and text processing in five languages with almost human-like contextual understanding, the models push the boundaries of modern AI for a range of language- and image-based use cases, such as AI assistants for drafting complex texts, higher-level summaries, searching for highly specific information across hundreds of documents, and drawing on specialized knowledge in a conversational context.

Aleph Alpha set up the HPE Machine Learning Development System quickly and began training its models in hours rather than weeks, merging and monitoring hundreds of GPUs. According to Jonas Andrulis, founder and CEO of Aleph Alpha, the company was pleasantly surprised by the system's efficiency and performance of more than 150 teraflops.

“While running these massive workloads, combined with our ongoing research, being able to rely on an integrated solution for deployment and monitoring makes all the difference,” says Andrulis.

Both solutions are currently available. Swarm Learning can also be coupled with the Machine Learning Development System.


Bengaluru boy develops India’s first AI, sensor-based bicycle counter


An eighteen-year-old from Bengaluru has developed India's first artificial intelligence (AI)-powered, sensor-based bicycle counter. 

According to Nihar Thakkar, the creator of the system, the technology is an electronic device that counts cycles passing a specific location over a set period and is installed to encourage cycling. 

In collaboration with the Directorate of Urban Land Transport (DULT) and Doddanekundi residents, the newly developed cycle counter has been installed near the Outer Ring Road cycle lane in Doddanekundi, Bengaluru. 

Read More: Meta says its new AI can formulate Sustainable Concrete

“This is the first live bicycle counter in the country and will help collect usage data of bicycles using the dedicated lane, which will make the impact of the cycle lane more visible and encourage cycling. The live bicycle counter is installed under the SuMA (Sustainable Mobility Accord) project being funded by DULT as a pilot project,” said Thakkar to The Indian Express. 

He further added that the counter can distinguish between bicycles and all other vehicles and will only count cyclists in both directions on the cycle lane and the service road. Moreover, the bicycle counter is powered by an AI-sensor camera and a machine learning algorithm that can use the gathered data to promote cycling and build more dedicated bike lanes. 
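
Thakkar's implementation is not public, so the snippet below is only a toy sketch of how a camera-based counter of this kind typically works: a detector-plus-tracker emits bicycle centroids per frame, and a virtual line-crossing check tallies them per direction. The `update_counts` helper and its inputs are hypothetical.

```python
# Toy sketch of directional bicycle counting via a virtual count line.
# A real system would feed this from an AI camera's detector/tracker;
# here the tracked centroids are supplied by hand for illustration.

COUNT_LINE_X = 320          # virtual line across the cycle lane, in pixels
counts = {"inbound": 0, "outbound": 0}
last_x = {}                 # track id -> x position in the previous frame

def update_counts(tracked_bicycles):
    """tracked_bicycles: list of (track_id, x_centroid) pairs for one frame."""
    for track_id, x in tracked_bicycles:
        prev = last_x.get(track_id)
        if prev is not None:
            if prev < COUNT_LINE_X <= x:      # crossed left-to-right
                counts["outbound"] += 1
            elif prev >= COUNT_LINE_X > x:    # crossed right-to-left
                counts["inbound"] += 1
        last_x[track_id] = x

update_counts([(1, 300)])   # frame 1: bicycle 1 approaches the line
update_counts([(1, 335)])   # frame 2: bicycle 1 has crossed it
print(counts)               # {'inbound': 0, 'outbound': 1}
```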

Apart from creating this AI-powered bicycle counter, Thakkar also founded Urban Flow, which he says develops AI-powered tools for cycle lane enforcement, traffic analysis, and live bicycle counters. 

“At Urban Flow, we are building AI-based tools that will help the enforcement of traffic rules. Most of the dedicated cycle lanes are being misused by two wheelers,” said Thakkar. He also mentioned that the technology uses an artificial intelligence-enabled camera that processes video footage in real-time to collect and send evidence of traffic violations to authorities.


Research Team Develops TinyML neural network FOMO for real-time object detection


Researchers at Edge Impulse, a platform that builds ML models for the edge, have created a novel machine learning solution that enables real-time object detection on devices with limited CPU and storage capacity. The solution, called Faster Objects, More Objects (FOMO), is a novel deep learning architecture with the potential to open up new computer vision applications.

While cloud-based deep learning solutions have seen tremendous acceptance, they aren't suitable for every circumstance, as many apps need inference on the device. For example, internet connectivity is not assured in situations such as drone rescue operations, and for applications that require real-time machine learning inference, the delay induced by the round trip to the cloud can prove risky. Edge devices promise to resolve the latency issues of the cloud; however, traditional deep learning algorithms may fail to work on resource-constrained edge devices.

TinyML is an approach to optimizing machine learning models for resource-constrained embedded devices, which are often battery-powered and have minimal computing power and memory. Because traditional machine learning models will not run on such hardware, TinyML transforms and optimizes them for the smallest unit of an electronic device, the microcontroller.
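
As a concrete example of that transformation step, one common toolchain is TensorFlow Lite: a trained Keras model is converted and quantized into a compact byte buffer that a microcontroller interpreter can execute. The tiny model below is purely illustrative.

```python
import tensorflow as tf

# Train (or load) a small model, then convert and quantize it for a
# microcontroller. TensorFlow Lite is one common TinyML toolchain.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),            # e.g., a window of sensor readings
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(sensor_windows, labels, epochs=20)     # training on real data goes here

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
tflite_model = converter.convert()

# The resulting bytes are typically embedded in firmware (e.g., as a C array)
# and run with an interpreter such as TensorFlow Lite for Microcontrollers.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```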

The ability to run a deep learning model on such a small device contributes significantly to the democratization of artificial intelligence. Edge computing devices have been created to fulfill the requirement for computational capacity closer to endpoints as a result of the internet of things (IoT). Even if there is no connectivity, TinyML machine learning algorithms can be applied to small, low-power devices that lie at the end of an IoT network. It assures that devices can process data in real-time, right where it is produced, and that they can identify and respond to problems in real-time, independent of both latency and bandwidth. Data processing at the source of data generation also secures it from hackers who target centralized data stores.

Looking ahead, making TinyML more available to developers will be crucial for turning otherwise wasted data into meaningful insights and developing innovative applications across a variety of sectors.

The memory and processing requirements of most object-detection deep learning models exceed the capabilities of tiny CPUs. On the other hand, FOMO only takes a few hundred kilobytes of memory, making it ideal for TinyML.

FOMO is based on the premise that not all object-detection applications need the high-precision output that state-of-the-art deep learning models can give. Users can scale deep learning models to very small sizes while making them effective by identifying the most appropriate tradeoff between accuracy, performance, and memory. 

This is a remarkable milestone because, unlike image classification, which predicts only the presence of a certain type of object in an image, object detection requires the model to recognize multiple objects along with the bounding box of each occurrence. As a result, object detection models are far more demanding than image classification networks and require considerable processing power.

FOMO predicts the object's center rather than a bounding box, because many object detection applications are concerned only with the placement of items in the frame, not their sizes. Detecting centroids is substantially more computationally efficient than bounding box prediction and requires less data.

FOMO also introduces structural changes relative to conventional deep learning architectures. The team explains that single-shot object detectors consist of a collection of convolutional layers that extract features and several fully connected layers that predict the bounding box. The convolutional layers work in hierarchical order: the first layers identify simple features in various orientations, such as lines and edges. Each convolutional layer is typically paired with a pooling layer, which shrinks the layer's output while keeping the most salient features in each area.

After that, the output of the pooling layer is passed into the next convolutional layer, which extracts higher-level features like corners, arcs, and circles. The feature maps zoom out and can recognize intricate things like faces and objects when more convolutional and pooling layers are added. Finally, the fully connected layers flatten the final convolution layer’s output and attempt to forecast object class and bounding box.

Read More: MIT researchers develop an AI model that understands object relationships

In the case of FOMO, the fully connected layers and the last few convolution layers are dropped. This reduces the neural network’s output to a smaller version of the original image, with each output value representing a tiny patch of the original image. After that, the network is trained using a unique loss function such that each output unit can predict the class probabilities for every corresponding patch in the input picture. The result is essentially a heatmap for different categories of objects. 
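
Edge Impulse's exact layer configuration isn't given here, but the described idea, truncating a classification backbone and replacing the fully connected head with a per-patch classification map, can be sketched in Keras. The cut point and class count below are assumptions chosen for illustration.

```python
import tensorflow as tf

# Sketch of a FOMO-style centroid detector (layer choices are illustrative).
NUM_CLASSES = 3  # object classes, including a background class

backbone = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights=None
)
# Truncate the backbone early so the output grid stays fine-grained:
# this intermediate layer yields a 12x12 feature map for a 96x96 input.
features = backbone.get_layer("block_6_expand_relu").output

# A 1x1 convolution replaces the fully connected head: each spatial cell
# predicts class probabilities for its corresponding patch of the input,
# producing the per-patch heatmap described above.
heatmap = tf.keras.layers.Conv2D(
    NUM_CLASSES, kernel_size=1, activation="softmax"
)(features)

fomo_like = tf.keras.Model(backbone.input, heatmap)
print(fomo_like.output_shape)  # (None, 12, 12, 3): a heatmap, not bounding boxes
```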

This approach is different from pruning, a popular type of optimization algorithm that compresses neural networks by omitting parameters that are not relevant to the model’s output.

FOMO’s architecture offers a number of major advantages. For starters, FOMO works with existing architectures like MobileNetV2, a popular deep learning model for image classification on edge devices. 

FOMO also reduces the memory and compute needs of object detection models by dramatically reducing the size of the neural network. It is 30 times quicker than MobileNet SSD and can run on devices with less than 200 kilobytes of RAM, according to Edge Impulse.

There are also certain bottlenecks to take care of. For instance, FOMO works best when objects are of similar size; this may not hold if there is one very large object in the foreground and many small objects in the background. Additionally, if objects are too close together or overlap, they occupy the same grid square, lowering the detector's accuracy. To some extent, this limitation can be worked around by decreasing FOMO's cell size or boosting the image resolution. The team at Edge Impulse claims FOMO is very effective when the camera is in a fixed position, such as scanning products on a conveyor belt or counting automobiles in a parking lot.

The Edge Impulse team hopes to improve on its work, for example by downsizing the model to under 100 kilobytes and improving its transfer learning capabilities.


SparkCognition and University of Texas at Austin partner for AI and Robotics


Artificial intelligence startup SparkCognition has partnered with the University of Texas at Austin to advance AI and robotics. 

The primary motive of this new partnership is to advance artificial intelligence in robotics and its applications in the industry. 

As a part of this collaboration, Texas Robotics will use SparkCognition’s HyperWerx facility for development and education to bring the physical world together with cutting-edge AI technology. 

Read More: IIT Jodhpur Researchers develop AI algorithm to detect Cataracts

The HyperWerx facility, located on 50 acres in the greater Austin area, is an AI testing ground for robotics, unmanned aerial vehicles (UAVs), factory automation, oil and gas exploration, etc. 

According to the company, its HyperWerx campus is a one-of-a-kind space dedicated to designing, developing, experimenting, and commercializing AI-powered physical solutions. 

Dr. Peter Stone, Director of Texas Robotics and a Professor in the University’s Department of Computer Science, said, “HyperWerx allows us the opportunity to evaluate our robotics innovations via hands-on experiments under realistic conditions, thus enriching our understanding of what these systems are capable of, as well as facilitating educational experiences.” 

He further added that commercial advances in AI and robotics bring capabilities to the market that promise to make workplaces safer, more sustainable, and more productive for everyone. 

Texas Robotics represents researchers and students from UT Austin’s Cockrell School of Engineering, College of Natural Sciences, and Department of Computer Science, among other departments and colleges. UT Austin students will now have access to industry and technical specialists and the newest aerial and terrestrial development equipment at HyperWerx. 

Chief Science Officer at SparkCognition, Professor Bruce Porter, said, “Texas Robotics is a great representation of the university’s dedication to innovation in robotics and the use of artificial intelligence to fuel industrial change.” 

He also mentioned that, by bringing this investment in innovation to SparkCognition's HyperWerx facility, the university could speed up its research through physical experimentation and close collaboration, ultimately driving the commercialization of robotics into society.


Researchers Develop Neural Network Model that Can Read Human Faces


Researchers at Stevens Institute of Technology have taught an AI system to model first impressions and utilize facial images to correctly forecast how individuals will be perceived, in collaboration with Princeton University and the University of Chicago. Their work, which was published in PNAS, introduces a neural network-based model that can predict with surprising precision the arbitrary judgments people would make about individual photos of faces.

When two individuals meet, they make quick judgments about everything from the other person’s age to their intelligence or trustworthiness, based purely on how they appear. First impressions can be tremendously powerful, even though they are frequently erroneous, in shaping our relationships and influencing everything from hiring to criminal punishment. There is also ample psychological research backing the notion that such judgments often bias us, affecting decision-making and thought processes. 

Thousands of individuals were asked to score over 1,000 computer-generated photographs of faces based on qualities such as how intelligent, electable, religious, trustworthy, or outgoing the subject of the photograph appeared to be. The data was then used to train a neural network to make similar quick decisions about people based merely on images of their faces. Jordan W. Suchow, a cognitive scientist and AI expert at Stevens School of Business, led the team, which also comprised Princeton’s Joshua Peterson and Thomas Griffiths, and Chicago Booth’s Stefan Uddenberg and Alex Todorov.

(Figure) Using deep learning to predict users’ superficial judgements of human faces. Credit: Peterson et al.
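
The paper's architecture isn't detailed in this article, so the sketch below only shows the general shape of such a system: a convolutional backbone regressing averaged human ratings from a face photo. The backbone, trait count, and output scaling are illustrative assumptions, not the published model.

```python
import tensorflow as tf

# Illustrative sketch: regress mean human ratings (trustworthiness,
# intelligence, etc.) from face images. Not the published PNAS model.
NUM_TRAITS = 10  # hypothetical number of rated attributes

backbone = tf.keras.applications.ResNet50(
    input_shape=(224, 224, 3), include_top=False, pooling="avg", weights=None
)
# One sigmoid output per trait, matching ratings rescaled to [0, 1].
ratings = tf.keras.layers.Dense(NUM_TRAITS, activation="sigmoid")(backbone.output)
model = tf.keras.Model(backbone.input, ratings)

model.compile(optimizer="adam", loss="mse")
# model.fit(face_images, mean_ratings_per_image, epochs=10)
```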

In recent years, computer scientists have created a variety of complex machine learning models that can analyze and categorize vast quantities of data, accurately anticipate certain occurrences, and generate images, audio recordings, and texts. However, in reviewing past research on judgments based on facial appearance, Peterson and his colleagues observed that relatively few studies had used state-of-the-art machine learning to investigate the area. According to Suchow, the team filled this gap by combining human assessments of faces with machine learning to investigate people’s biased first impressions of one another.

According to Suchow, the team can use this algorithm to predict people's first impressions of you and the preconceptions they will project onto you, using just a photo of your face.

Many of the algorithm’s observations correspond to common intuitions or cultural assumptions: for example, people who smile are perceived as more trustworthy, while people who wear glasses are perceived as more intelligent. In other instances, however, it is difficult to explain why the algorithm assigns a certain trait to a person.

Suchow clarifies that the algorithm does not provide targeted feedback or explain why a certain image elicits a specific opinion. However, it can assist us in comprehending how we are seen. “We could rank a series of photos according to which one makes you look most trustworthy, for instance, allowing you to make choices about how you present yourself,” says Suchow.

The new algorithm, created to help psychology researchers generate face images for experiments on perception and social cognition, might also have real-world applications. The team pointed out that people generally curate their public personas, posting only photos they believe make them appear bright, confident, or attractive, and it is easy to see how the algorithm could help with that. Because there is already a societal norm around presenting yourself favorably, they noted, this use avoids some of the ethical issues surrounding the technology. 

On the malicious side, the algorithm could be used to manipulate images to make their subjects appear a certain way, such as making a political candidate seem more trustworthy or their opponent stupid or suspicious. While AI techniques are already being used to make “deepfake” videos depicting events that never occurred, the team fears their new algorithm might discreetly alter real images to influence the viewer’s perception of their subjects.

Read More: Clearview AI to Build a Database of 100 billion facial Images: Should we be worried?

Therefore, to ensure the neural network-based algorithm isn’t misused, the research team has obtained a patent on the technology and is currently forming a startup to license the algorithm for pre-approved ethical uses. 

While the current algorithm focuses on average responses to a particular face over a wide group of viewers, the research team next intends to build an algorithm that can anticipate how a single person will react to another person’s face. This might provide significantly more insight into how quick judgments impact our social interactions, as well as potentially aid people in recognizing and considering alternatives to their first impressions when making crucial decisions.


Bihar Police to use AI tools to bust Illegal Liquor Rackets


Bihar Police has announced that it plans to use artificial intelligence (AI) tools to bust illegal liquor rackets active in the dry state. 

According to a senior police official, the AI mechanism will digitize and automate all processes, removing the need for the force to manage data manually. 

The state’s liquor prohibition law, which went into effect in April 2016, prohibits the manufacture, sale, and use of alcoholic beverages. 

Read More: Yellow.ai launches pre-built Dynamic AI Agents to deliver faster time-to-market

However, several reports and incidents suggest that the law is not being followed, as numerous complaints of illegal liquor sales have emerged across the state. The AI-powered tool will therefore help the police department hunt down violators. 

“Once introduced, it will help policemen nab gangs or individuals involved in illegal liquor trade in the dry state. It will be easy to identify their area of operations with real-time analytics and automated processes,” Kamal Kishor Singh, Additional Director General of the State Crime Records Bureau, told PTI. 

He further added that law enforcement organizations are already utilizing AI in a variety of ways across the country. 

Recently, Kolkata Police also started testing an artificial intelligence tool to track and charge traffic rule violators. Officials said that the software has been uploaded to the system of the Lalbazar control room, the headquarters of Kolkata Police. 

Apart from law enforcement, AI is also being adopted in various government schools in states like Assam, Tamil Nadu, and some parts of Uttar Pradesh to mark students’ attendance, lowering the burden on teachers. 

Singh mentioned that nearly 2,000 officers and workers, including IT inspectors and constables, will be part of the proposed cadre, and the AI system operations will be handled by officials from the IT cadre. 

Singh said, “From the perspective of crime handling and management, the AI tools will help in exploratory analysis. All documents, including criminal records, would be scanned and digitized, aiding the force on the ground.” 

He also mentioned that predictive policing using AI techniques would assist the police force in predicting the types of crimes that may occur in a given region, along with the potential perpetrators.


Yellow.ai launches pre-built Dynamic AI Agents to deliver faster time-to-market


Customer experience automation platform Yellow.ai has launched new pre-built dynamic AI agents to deliver faster time-to-market. 

The addition of pre-trained, ready-to-deploy vertical artificial intelligence agents is intended to help organizations speed up their total experience (TX) automation, allowing companies to drive innovation. 

Companies can also use Yellow.ai’s pre-built Dynamic AI agents for employee experience (EX) to automate end-to-end EX activities, such as onboarding, training, and other HRD operations. 

Read More: Meta says its new AI can formulate Sustainable Concrete

Yellow.ai says the pre-trained AI agents boost employee productivity by up to 30% and employee satisfaction by up to 40%, enabled by smooth integrations with existing HCM and ITSM systems. 

Earlier this year, Yellow.ai was recognized in the 2022 Gartner Magic Quadrant for Enterprise Conversational AI Platforms for its unmatched capabilities. Many industry-leading organizations like Domino’s, Sephora, Hyundai, Biogen International, and Edelweiss Broking use Yellow.ai’s solution for customer communication. 

According to Yellow.ai, it aims to have over 100 pre-built accelerators on its Marketplace by the end of the second quarter of 2022. 

CPO and Co-founder of Yellow.ai Rashid Khan said, “To address the evolving needs of customers and employees, enterprises today prefer Total Experience automation solutions that deliver results in no time. With our pre-built, vertical Dynamic AI agents, we aim to enable them through easy to use, pre-trained customizable models that deliver accuracy, speed to value, and consistency specific to their business needs.” 

He further added that one of the largest automobile manufacturers was able to automate the end-to-end purchase cycle for their end consumers and increase month-over-month customer engagement rates by 300% using Yellow.ai’s verticalized solutions. 

In addition to the pre-built dynamic AI agents, Yellow.ai also announced support for WhatsApp 24-hour window expiry and video calling functionality on the cloud in its existing omnichannel solution. Interested users can visit the official website of Yellow.ai to request a demo of the platform. 


IIT Jodhpur Researchers develop AI algorithm to detect Cataracts


Researchers at the Indian Institute of Technology (IIT) Jodhpur have developed a novel artificial intelligence (AI) algorithm that can accurately detect cataracts. 

The researchers found that eye images captured by low-cost near-infrared (NIR) cameras could enable inexpensive, easy-to-use, and practical cataract detection solutions. 

Therefore, the AI solution developed by IIT Jodhpur researchers can make the diagnosis and detection of cataracts more accessible and affordable for the public. The research has been published in the Computer Vision and Image Understanding journal. 

Read More: Microsoft identifies New Privilege Escalation Flaws in Linux Operating System

According to the study, the proposed method can be used in rural areas where doctors are scarce. The traditional procedure for cataract detection involves using costly ophthalmoscopes that capture fundus images and can only be operated by experienced professionals, making the processes extraordinarily technical and challenging to carry out. IIT Jodhpur’s solution to this problem can revolutionize cataract detection. 

The researchers mentioned, “Known as MTCD, the proposed multitask deep learning algorithm is inexpensive and results in very high levels of accuracy. This research presents a deep learning-based cataract detection method that involves iris segmentation and multitasks network classification.” 

They further added that the proposed segmentation algorithm detects non-ideal eye boundaries efficiently and effectively. 
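
The published MTCD architecture isn't reproduced in this article, but the described combination, iris segmentation plus a classification network trained jointly, can be sketched as a generic multitask model with a shared encoder. Every layer size below is an assumption for illustration.

```python
import tensorflow as tf

# Generic multitask sketch in the spirit of the described method: a shared
# encoder feeds an iris-segmentation head and a cataract-classification head.
# Layer sizes are illustrative assumptions, not the published MTCD network.
inputs = tf.keras.Input(shape=(128, 128, 1))  # NIR eye image

x = tf.keras.layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
x = tf.keras.layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)

# Head 1: per-pixel iris mask, upsampled back to the input resolution.
seg = tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding="same",
                                      activation="relu")(x)
seg = tf.keras.layers.Conv2DTranspose(1, 3, strides=2, padding="same",
                                      activation="sigmoid", name="iris_mask")(seg)

# Head 2: binary cataract / no-cataract prediction from pooled features.
cls = tf.keras.layers.GlobalAveragePooling2D()(x)
cls = tf.keras.layers.Dense(1, activation="sigmoid", name="cataract")(cls)

model = tf.keras.Model(inputs, [seg, cls])
model.compile(optimizer="adam",
              loss={"iris_mask": "binary_crossentropy",
                    "cataract": "binary_crossentropy"})
```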

Dr. Mayank Vatsa and Dr. Richa Singh from IIT Jodhpur’s Image Analysis and Biometrics (IAB) Lab conceived the study, supported by undergraduate and Ph.D. students Mahapara Khurshid, Yasmeena Akhter, Rohit Keshari, Pavani Tripathi, and Aditya Lakra. 

“We are extending this research to include both cataract and diabetic retinopathy in the solution and have collaborated with multiple hospitals in the country for domain expertise, data collection, and validation of the solution,” said Dr. Mayank Vatsa. 


Meta says its new AI can formulate Sustainable Concrete


Meta, formerly known as Facebook, claims that it has created a new artificial intelligence (AI) model that can effectively formulate sustainable concrete. 

Meta AI researchers have collaborated with the University of Illinois at Urbana-Champaign to develop this novel AI that can design and revise formulas for increasingly high-strength, low-carbon concrete. 

Concrete accounts for 8% of global carbon emissions, making it one of the largest sources of carbon emissions in the world. As a result, lowering concrete emissions will have a far-reaching impact. 

Read More: Nitin Gadkari invites Tesla to manufacture EVs in India

Researchers used the Concrete Compressive Strength data set, freely available from the UCI Machine Learning Repository, to train this AI model. The AI model generated multiple new concrete mixes using the input data on concrete formulas, along with their compressive strength and carbon footprint. 

According to Meta, the database used to train the AI model contains 1,030 instances of concrete formulas and their validated attributes, which includes seven-day and 28-day compressive strength data. 
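
Meta's generative model itself isn't public, but the training data is. A minimal baseline on the same UCI dataset, fitting a regressor to predict compressive strength from mix proportions, looks like this; it omits the generative formula-design step and assumes the dataset file has been downloaded locally.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Baseline on the public UCI Concrete Compressive Strength dataset
# (1,030 mixes). This only predicts strength from a mix; Meta's
# generative formula-design model is not public.
data = pd.read_excel("Concrete_Data.xls")  # downloaded from the UCI ML Repository

X = data.iloc[:, :-1]  # cement, slag, fly ash, water, superplasticizer, aggregates, age
y = data.iloc[:, -1]   # compressive strength (MPa)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"R^2 on held-out mixes: {model.score(X_test, y_test):.2f}")
```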

“The embodied carbon footprint associated with the concrete formulas was derived using the Cement Sustainability Initiative’s Environmental Product Declaration (EPD) tool. EPDs are a standardized way of accounting for the environmental impacts of a product or material, including carbon emissions over its life cycle,” mentioned Meta in the blog post. 

Researchers spent only about a week refining the AI model. After refinement, the concrete formula designed by the model met or surpassed all of the standards while replacing up to 70% of the cement with the low-carbon substitutes fly ash and slag. 

“We wanted to test the formulas in the field and selected our data center in DeKalb, Illinois, as an ideal location, given its proximity to the University of Illinois at Urbana-Champaign,” said Meta. 
