Tesla’s new wireless charging platform can charge three devices at once

Tesla has introduced its wireless charging platform, a Qi-based charging solution whose design is reportedly inspired by the upcoming Cybertruck. It can wirelessly charge up to three devices at once, delivering up to 15W of power to each.

The Tesla wireless charging platform uses FreePower technology, reminiscent of Apple’s abandoned AirPower project, which charges a phone or accessory without requiring exact alignment with the charging pad.

The charging pad has an Alcantara surface that protects phones from scratches. It also comes with a detachable magnetic stand that lets users lay the phone flat or prop it at an angle, and includes a 65W adapter to power the pad.

Read More: Wipro Acquires Pune-Based IoT Startup Linecraft.Ai

The pad’s slim design lets devices such as two phones and a TWS earphone charging case sit side by side. According to Tesla, exact alignment is not necessary; devices may be placed anywhere on the charging surface.

The Tesla wireless charging platform is available for pre-order on Tesla’s web store for $300, with deliveries beginning in February 2023.

Fortuna: Amazon unveils a new library for uncertainty quantification of ML models

Last week, Amazon unveiled Fortuna, a new open-source library for uncertainty quantification of machine learning models. Fortuna offers calibration methods, such as conformal prediction, that can be applied to any trained neural network to obtain calibrated uncertainty estimates. The library also supports several Bayesian inference methods for deep neural networks written in Flax.

Accurate estimation of predictive uncertainty is vital for applications that involve critical decisions. Uncertainty enables data scientists to evaluate the reliability of model predictions, defer to human decision-makers, or detect if a model can be deployed safely. Fortuna makes it easy for them to run benchmarks and build robust and reliable AI models with advanced uncertainty quantification techniques.

Read more: Perfect raises $13M in seed funding

Existing libraries and tools for uncertainty quantification have a limited scope and do not provide a broad set of techniques in a single place, which hinders the adoption of uncertainty quantification in production systems. To solve this problem, Amazon developed Fortuna, which brings together prominent methods from the literature, such as conformal prediction, Bayesian inference, and temperature scaling, behind a standardized interface.
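
To give a flavor of one such technique, here is a minimal, framework-free sketch of split conformal prediction (illustrative only, not Fortuna’s actual API): a score threshold is calibrated on held-out data, and every class scoring at or below it enters the prediction set.

```python
import math

def conformal_threshold(cal_scores, alpha=0.1):
    """Split conformal prediction: return the ceil((n+1)(1-alpha))-th
    smallest nonconformity score from a held-out calibration set."""
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))  # rank of the quantile
    return sorted(cal_scores)[min(k, n) - 1]

def prediction_set(class_scores, qhat):
    """Return every class whose nonconformity score is at most the
    calibrated threshold; under exchangeability the set contains the
    true label with probability roughly 1 - alpha."""
    return [c for c, s in sorted(class_scores.items()) if s <= qhat]
```

For example, with nine calibration scores 0.1 through 0.9 and alpha = 0.1, the threshold is 0.9, so any class scoring at or below 0.9 is included in the prediction set.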

Fortuna is readily available on GitHub, with examples, documentation, and references. It was developed by a group of applied scientists: Gianluca Detommaso, Alberto Gasparin, Michele Donini, Matthias Seeger, and Cedric Archambeau.

North Korean hackers stole NFTs through 500 phishing domains

The notorious Lazarus Group of North Korea, known for instigating cyberattacks, has once again drawn attention after launching two consecutive attacks on the NFT industry. The group has created around 500 phishing domains and is using them to deceive unwary NFT buyers.

On December 24, the blockchain security company SlowMist published a report outlining the tactics used by North Korean Advanced Persistent Threat (APT) groups to separate NFT investors from their NFTs, including fraudulent websites impersonating various NFT-related platforms and projects. These fraudulent websites, which imitate well-known NFT markets like OpenSea, X2Y2, and Rarible, include one that pretends to be a World Cup project and others that counterfeit other well-known NFT projects. 

According to SlowMist, one tactic employed by the fake websites is the “malicious mint,” which tricks victims into believing they are minting a legitimate NFT by connecting their wallet to the site. Since the NFT is actually a scam, connecting the wallet instead hands the hacker access to it.

The report also mentioned that a large number of the phishing websites shared the same Internet Protocol (IP) address, with 372 NFT phishing websites sharing a single IP and another 320 using a different one. SlowMist revealed that the phishing campaign had been underway for some months, with the first domain registered roughly seven months earlier.

An example phishing website. Source: SlowMist

Other phishing tactics included capturing visitor data and storing it on external sites, and tying photos to specific projects. Once the hackers had a visitor’s data, they would run a range of attack scripts against the victim, harvesting plug-in wallet data, authorizations and access records, and sensitive information such as approval records and sigData. With all this information, the hackers could then access the victim’s wallet and compromise their digital assets.

Read More: Meta takes down 40 phishing accounts by CryperRoot Risk Advisory

Since the research examined only a small fraction of the material, and only “some” of the North Korean hackers’ phishing traits were recovered, SlowMist emphasized that this is merely the “tip of the iceberg.” Hopefully, more information regarding these attacks will surface in the coming weeks.

Wipro acquires Pune-based IoT startup Linecraft.Ai

According to a LinkedIn post, Wipro Infrastructure Engineering has signed an agreement to acquire a Pune-based Internet of Things (IoT) product startup in the manufacturing sector called Linecraft.Ai.

According to the CEO of Wipro Infrastructure Engineering, Pratik Kumar, this acquisition will further help the company expand its digital capabilities and offer excellent automation solutions with a bolt-on digital layer that provides deep analytics and insights to customers. 

Linecraft.Ai combines machine learning with automation domain expertise to help manufacturers improve quality, productivity, and operational efficiency in real time.

Read More: Google Grants US$1m To Wadhwani Institute For Artificial Intelligence For AI In Agriculture

The software was built by a team of machine learning experts around a patented algorithm, and has been proven through successful implementations on complex automated manufacturing assembly lines worldwide.

Wipro PARI is now among the most prominent global automation companies, with more than 1,200 employees and three facilities worldwide. Its headquarters are in Pune, India.

ftNFT Opens a Real-World NFT Store in Dubai

The non-fungible token marketplace ftNFT plans to bring crypto and NFT enthusiasts together with its real-world NFT store in Dubai’s Mall of the Emirates. The store is part of ftNFT’s expansion beyond its virtual ventures, aimed at getting more artists and influencers on board with digital assets.

SoftConstruct ftNFT is a premium NFT marketplace backed by SoftConstruct and a part of the Fastex Web3 ecosystem that supports USDT, BTC, ETH, DOGE, XRP, and BNB. It stands out because of its unique ftNFT Drop feature, a first-of-its-kind dropping feature that allows users to win rewards. 

Vigen Badalyan, SoftConstruct’s co-founder, said, “SoftConstruct unveils ftNFT physical stores located in Dubai Mall and Mall of the Emirates — the city’s most iconic retail and entertainment destinations, welcoming over 130 million visitors yearly.”

Read More: Fashion X AI Show In Hong Kong Features Fits Created By AiDA

The NFT store will allow people to interact and experience modern-age services in a whole new way. The store will host collections created by AKNEYE, Chiko & Roko, Amrita Sethi, and other notable NFT artists.

SoftConstruct ftNFT also offers services such as NFT customization, letting pieces be experienced digitally or through the five senses, and a 3D scanner that creates avatars for virtual reality enthusiasts. A significant portion of the store will be devoted to helping customers learn about Web3, NFTs, and the products and services supplied on-site, with a Fast Desk to answer their queries.

Fashion X AI Show in Hong Kong Features Fits Created by AiDA

An AI fashion event, Fashion X AI, featured several designer-led AI fits created by an AI-based Interactive Design Assistant, or AiDA. With over 80 outfits and 14 designers, the event focused on creating outfits using software developed by Ph.D. students and experts from AiDLab, Hong Kong. 

AiDLab’s CEO, Calvin Wong, said the idea was to educate people and fashion designers about artificial intelligence as a “supporting tool.” With AiDA, designers can benefit from several technologies like image recognition, image generation, detection, and more. The software’s algorithm also generates blueprints for future reference and enables the designers to add their sense of taste to the outfits.

Wong added that AiDA could generate more than a dozen fashion blueprints within ten seconds, saving a lot of designing time.

Read More: Meta OPT-IML: a Large Language Model with 175B Parameters

Cynthia Tse, a renowned merchandiser, said she felt this is what the future of fashion looks like, adding that the facial coverings looked alien-like yet intriguing. Another Hong Kong-based fashion designer, Mountain Yam, has been using AiDA for over six months and finds it very inspirational.

Yam said that a designer’s relationship with AiDA is like a romantic one where the former gradually gets to know her [AiDA], and she [AiDA] gets to know how the designer thinks. He added, “In accordance with my lines, styles, and databases, the system will propose something for me that I may not have ever considered, but she (AiDA) thinks is suitable for me.”

AiDA was launched at the Fashion X AI show and is still a work in progress; the developers will continue refining it to make it more beneficial to the fashion industry.

Meta OPT-IML: a Large Language Model with 175B Parameters

Over the past few years, many language models, like GPT-3, LaMDA, etc., have made the news because of their capabilities across multiple tasks, such as text generation, problem-solving, and sentiment analysis. Many companies are actively exploring the domain to harness this potential. Meta is one of the most successful companies working on natural language processing (NLP) technology, specifically on systems with over 100 billion parameters. These systems, called large language models, can potentially transform artificial intelligence research in the language and conversation domain. In its most recent development, Meta has introduced a new large language model (LLM) called OPT-IML.

As per Meta’s latest research, fine-tuning language models on pre-determined instructions can enhance their performance on new tasks. However, instruction tuning remains poorly understood among researchers. Following this line of research, Meta studied instruction tuning for large language models while scaling both model and benchmark sizes. OPT-IML, or OPT for Instruction Meta-Learning, comes with a benchmark of roughly 2,000 NLP tasks for evaluating model generalization. Using this benchmark, Meta shows how instruction tuning works first for OPT-30B and then for OPT-175B.

Core Technology: Instruction Fine-Tuning

Instruction fine-tuning entails optimizing LLMs using input formats designed for instruction on various NLP tasks. To put the technique into use, the researchers consolidated eight meta-datasets into a collection of 1,991 NLP tasks spanning more than 100 task categories, which were then transformed into the evaluation framework.
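
As a rough illustration of the input format (a hypothetical helper, not Meta’s preprocessing code), a single task instance can be serialized into a source sequence carrying the instruction and input, and a target sequence carrying the answer:

```python
def format_instance(instruction, source, target):
    """Serialize one NLP task instance into a (source, target) pair for
    instruction tuning: the instruction and input make up the source
    sequence; the answer makes up the target sequence."""
    src = f"{instruction}\n\nInput: {source}\n\nOutput:"
    tgt = f" {target}"
    return src, tgt
```

During fine-tuning, the model sees the concatenation of the two sequences but is trained to predict only the target portion.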

OPT-IML was instruction-tuned across three levels of generalizations: 

  • Model performance on fully-held-out task categories not used for tuning.
  • Model performance on unseen tasks from categories seen during instruction tuning (partially supervised).
  • Model performance on held-out instances of tasks seen during tuning (fully supervised).

The last condition assesses the generalization of supervised multi-task learning, while the first two assess the cross-task generalization of instruction tuning. The diversity and distribution of tuning tasks, the style of their prompts, and the targets used for fine-tuning all affect how effective instruction tuning is for LLMs. The researchers found that instruction tuning is worthwhile only when it improves performance on fully-held-out and partially supervised tasks without compromising fully supervised task performance.

From there, the researchers fine-tuned OPT with the same next-word prediction objective used in pre-training, conditioned on previously input tokens. Each training sequence was split into a source sequence and a target sequence, and only the loss terms over the target sequence were minimized. Formally, for a fine-tuning dataset D containing instances sᵢ with corresponding target tokens tᵢ = {tᵢⱼ}, and a model pre-trained with parameters 𝛩, the researchers minimize the loss

L(𝛩) = −∑ᵢ ∑ⱼ log p𝛩(tᵢⱼ | sᵢ, tᵢ,<ⱼ)
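
A minimal sketch of this masking idea (a hypothetical helper, not Meta’s training code), assuming the model has already produced per-token log-probabilities for the concatenated source-and-target sequence: the loss sums negative log-likelihood over the target positions only, masking out the source.

```python
def instruction_tuning_loss(token_logprobs, source_len):
    """Sum negative log-likelihood over target tokens only.

    token_logprobs: per-token log-probabilities for the full
    source + target sequence, as scored by the model.
    source_len: number of source (instruction + input) tokens,
    whose loss terms are masked out.
    """
    return -sum(token_logprobs[source_len:])
```

In a real training loop, this per-sequence loss would be averaged over a batch and minimized with gradient descent over 𝛩.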

In the following steps, the researchers fine-tuned all 30B models on 64 A100 Tensor Core GPUs and the 175B models on 128 A100 Tensor Core GPUs. Ultimately, each 30B model was tuned for 4,000 steps, and the 175B models for twice as many steps with half the batch size.

The evaluation datasets comprised tasks with answer options and tasks without them. For the former, the researchers used score-based classification, ranking each option by its likelihood under the model. For the latter, they decoded tokens until a maximum of 256 tokens was reached. Examining the results, the researchers concluded that OPT-IML outperforms standard (not fine-tuned) OPT models on all benchmarks. This is notable because the standard OPT baselines relied on prompts inherited from GPT-3 and refined through prompt engineering, whereas the fine-tuned OPT-IML is more robust and reduces the need for prompt engineering.
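
For tasks with answer options, score-based classification can be sketched as follows (illustrative only; the option names and scores are made up):

```python
def score_options(option_token_logprobs):
    """Pick the answer option the model considers most likely:
    sum each option's per-token log-probabilities and return the
    option with the highest total score."""
    scores = {opt: sum(lps) for opt, lps in option_token_logprobs.items()}
    return max(scores, key=scores.get)
```

In practice, such scores are often length-normalized so that longer answer options are not unfairly penalized.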

While the model is a significant improvement for instruction-tuning technology, it is still subject to some limitations. The instruction-tuning variables traded off in the evaluation framework may interact, leading to different optimal tuning settings, and the variable trade-offs studied on 30B models may deviate from the observed trends at larger scales.

However, instruction tuning can benefit many areas going forward, including multi-task learning, meta-training, and prompting. In multi-task learning (MTL), instruction tuning enhances generalization performance on fully-held-out tasks. Instruction tuning also improves prompting techniques, which have become a dominant paradigm in recent years. From here on, the researchers will continue working on instruction-tuning LLMs to improve their generalization abilities and find more applications for the technology.

Google Grants US$1m to Wadhwani Institute for Artificial Intelligence for AI in Agriculture

The Wadhwani Institute for Artificial Intelligence announced a US$1m grant from Google to develop AI-based solutions for agriculture. The institute will use the grant to develop AI-powered tools that would give farmers rapid, accurate, and localized weather, crop, and other farm-related information while also enhancing the effectiveness and capabilities of India’s agriculture knowledge systems.

Google India’s Research Director, Manish Gupta, highlighted that over half the Indian population relies on the agricultural sector for its livelihood. However, the sector remains far from the kind of technological development that can deliver “pervasive benefits.”

He added, “We’re therefore happy to announce a new 1 million dollar grant to Wadhwani AI via Google.org to support their deployment of AI-based solutions to improve agricultural outcomes.” More tech-driven solutions could help harness India’s agricultural dividend across its varied terrains.

Read More: MeitY AI Pe Charcha: Importance of Quality Datasets for AI

Small farmers rely largely on the government and nonprofit organizations for guidance at each stage of the crop cycle. Shekar Sivasubramanian, CEO of Wadhwani AI, said the institute’s solutions, with the government’s support, ensure that all communities and farmers can meaningfully work with them.

He added, “It [the grant] will help us to identify and address multiple challenges affecting agriculture in India through the use of timely and effective AI-based interventions.”

Delhi has the highest crypto adoption in India, says CoinSwitch report 

In India, Delhi has the highest crypto adoption in terms of value invested, says the annual investor report by CoinSwitch, followed by Bengaluru and Hyderabad. Among tier-2 cities, Jaipur saw the highest adoption of cryptocurrency, followed by Lucknow and Pune.

The report titled ‘India’s Portfolio 2022’ provided insights into how the country has responded to a year of landmark events such as the Ethereum Merge, rising inflation, and market downturns.

Among the metros, the city with the highest crypto adoption was Delhi, with 7.87 percent, while the tech hubs, Bengaluru and Hyderabad, were at 4.87 percent and 3.27 percent, respectively. 

Read More: Google Appeals Against ₹1337.6 Crores Fine Imposed By Competition Commission Of India

Among the tier-2 cities, the pink city Jaipur’s crypto adoption was recorded at 3.04 percent, followed by Lucknow at 2.02 percent and Pune at 1.75 percent. It was also revealed that the country is most active in the crypto market after office hours, i.e., between 6 pm and 9 pm. Further, Bitcoin remained the most popular coin among retail investors in 2022.

Dogecoin and Ethereum were other popular coins held by Indian investors. Interestingly, Jaipur allocated the highest share of its investments to meme coins compared with the rest of India.

Also, eight percent of the crypto investors in India are female, and the investing patterns followed by Indian men and women were broadly similar. Further, it was revealed that more than 60 percent of crypto investors in Gurugram are under 25 years of age.

MeitY AI Pe Charcha: Importance of Quality Datasets for AI

The Ministry of Electronics and Information Technology (MeitY) organized a session to discuss the importance of the quality of datasets used to train artificial intelligence (AI) models. Shri Abhishek Singh, President and CEO of NeGD (National eGovernance Division), chaired the session, which hosted many industry professionals: AI enthusiasts, AI practitioners, and government officials.

The session started with discussions on data for AI, key initiatives of the Indian government to work on dataset quality, and the National Data Governance Framework policy. Others, like Shri Srikanth Velamakanni, CEO of Fractal Analytics, spoke on the state of existing data ecosystems, challenges, and innovation. 

The AI Pe Charcha session also discussed “Unlocking Potential of India’s Open Data,” a recently released report by INDIAai, jointly authored with NASSCOM, which aims to drive innovation and facilitate the adoption of open government data platforms.

Read More: NITI Aayog’s Notion of Responsible AI

The Director & Co-founder of CivicDataLabs, Shri Gaurav Godhwani, spoke about his experience with open data platforms and the challenges organizations face in sourcing, developing, and ensuring open, high-quality data access.


AI Pe Charcha has been running since India’s first global AI summit, organized by MeitY in 2020 as a part of Responsible AI for Social Empowerment (RAISE). The previous AI Pe Charcha session was held on October 28, 2021, themed “AI for Data-Driven Governance.”
