Many tech companies are mulling the development of “super apps” to loosen Google’s and Apple’s hold on the mobile space. These apps would bundle shopping, news, messaging, and web search into a single application.
Microsoft is one of the companies that have recently shown interest in super apps as a way to loosen Apple and Google’s grip and to boost its multibillion-dollar business and its Bing search engine. With a super app, Microsoft expects to grow the user base of its Teams messaging service and other mobile offerings.
Other companies, such as Tencent with WeChat and Grab Holdings, are already exploring the super-app domain. Elon Musk, the CEO of Tesla Inc. and the owner of Twitter, has also expressed interest in creating a super app called “X” that would bring together various services.
Musk is all the more inclined to launch a super app after his recent feud with Apple over the steep commissions it charges on in-app purchases. Musk also said, “Buying Twitter is an accelerant to creating X, the everything app.” However, X would be just one of many attempts at an everything app. In fact, Musk said in May that Tencent’s WeChat might serve as an inspiration for him.
These super apps, intended to loosen the hold of Google and Apple, would also act as a one-stop shop for almost everything because they gather many services behind a single interface.
With the Twitter Blue relaunch, Twitter users will get a revamped version of the subscription service, at a higher price for Apple users: US$8 per month on the web and US$11 per month on iOS. The platform tweeted on Sunday about the prices and said the new offering will let subscribers change their handles, edit tweets, upload videos in up to 1080p quality, receive a blue checkmark after account verification, and much more.
we’re relaunching @TwitterBlue on Monday – subscribe on web for $8/month or on iOS for $11/month to get access to subscriber-only features, including the blue checkmark 🧵 pic.twitter.com/DvvsLoSO50
In an effort to combat impersonation, Twitter has now introduced a review process before awarding a blue checkmark to an account. The social media site will soon add grey checkmarks for government and “multilateral” accounts, as Twitter calls them, and gold checkmarks for corporations, further color-coding timelines.
Twitter has not been very transparent about the price difference between web and iOS subscriptions. However, it may have something to do with discouraging people from subscribing via the App Store.
Elon Musk has not been satisfied with Apple’s policies and has tweeted about many of his grievances, including Apple charging software developers a 30% commission on in-app purchases. Later, he claimed that Apple had threatened to ban Twitter from its App Store.
Fortunately, after a subsequent meeting with Tim Cook, Apple’s CEO, Elon tweeted:
Good conversation. Among other things, we resolved the misunderstanding about Twitter potentially being removed from the App Store. Tim was clear that Apple never considered doing so.
The Securities and Exchange Commission (SEC) has asked publicly-traded companies to inform investors about any sort of involvement with struggling cryptocurrency firms.
In a notice released on Thursday, the SEC said that the companies are obligated under federal law to disclose if the turbulent crypto market has impacted their finances or operations.
The SEC’s Division of Corporation Finance, the branch that makes sure companies disclose necessary information to investors, issued the notice to help companies prepare their disclosure documents.
The notice does not formally introduce new disclosure requirements. However, the recommendations are being seen as a sign of the regulator keeping a closer eye on cryptocurrency.
The SEC said that companies should discuss whether they have been exposed to crypto firms that have suspended withdrawals, filed for bankruptcy, or experienced excessive withdrawals.
It also asked companies to disclose the steps they are taking to secure the customers’ crypto assets, as well as whether the upheaval in the crypto market has caused them any reputational harm.
‘Swaasa,’ an AI-driven software, can assess whether a user’s lungs are abnormal from three coughs recorded near the phone. Swaasa was developed by Hyderabad-based Salcit Technologies and won the Anjani Mashelkar award for innovation in November.
The innovation originated five years ago in a discussion between Salcit founder Narayana Rao Sripada and a professor at AIIMS. It was then that he realized the medical sector’s need for non-invasive devices that can assess lung health in real time.
It took three years of development before Swaasa was ready to roll out; its origin lay in that discussion about the immediate need for a non-invasive technology that can help identify lung problems.
“If it can be attributed to lung parenchyma or lung airways, that itself can be of major value for the health worker to initiate an apt intervention,” said Narayana Rao.
Conventional diagnostics pose a challenge in rural areas, where access to medical care and diagnostic labs is not guaranteed. Swaasa eliminates all of these issues, said Manmohan Jain, Chief Operating Officer of Salcit.
The Reserve Bank of India (RBI) has shortlisted seven companies to help it use machine learning and artificial intelligence (AI) to enhance its supervisory operations. These companies made the list because of their advanced analytics and technological capabilities.
In September this year, the RBI issued a solicitation for expressions of interest (EoI) for hiring consultants to employ advanced analytics, artificial intelligence, and machine learning to produce supervisory inputs. Based on the evaluation of EoI documents, BCG India, Deloitte Touche Tohmatsu, Ernst & Young, KPMG, McKinsey and Co., Accenture, and Pricewaterhouse Coopers were the finalists.
The RBI plans to scale up the AI capabilities already used in the supervisory processes of its Department of Supervision. The department has been dedicated to developing machine learning models and exploring its data for new attributes.
Banks, urban cooperative banks, non-banking financial companies (NBFCs), payment banks, local area banks, credit information companies, and other Indian financial organizations come under the RBI’s supervisory jurisdiction.
Most of the exploration will help assess institutions’ financial soundness, asset quality, solvency, liquidity, and operational viability.
Neural networks have served as the backbone of nearly every notable achievement in deep learning-based AI. Artificial neural networks have drawn a great deal of interest for their use in fundamental tasks, including language processing and image recognition. However, the rapidly rising energy costs associated with ever-larger neural networks and higher processing demands are a barrier to further advancement. Optical neural networks can alleviate the energy and computational issues that other models suffer from. These deep learning architectures operate many times faster and use far less energy by relying on light signals rather than electrical impulses.
The core idea of artificial neural networks is based on the computational network models present in the nervous system. Several artificial neural network approaches, such as convolutional neural networks and recurrent neural networks, employ matrix multiplications and nonlinear activations (the functions that mimic how neurons in the human brain respond). The functionality and interconnectedness of neurons can be implemented in optical neural networks by using optical and photonic devices and the nature of light propagation. While nonlinear activation functions are normally implemented by either the optoelectronic method or the all-optical method in optical neural networks, optical components are frequently employed for linear functions. This is because nonlinear optics typically calls for high-power lasers, which are challenging to implement in an optical neural network.
In optical analog circuits, the linear unit multiplies an input vector by a weight matrix. One such circuit implements a certain class of unitary weight matrices with a constrained number of programmable Mach-Zehnder interferometers (MZIs). A Mach-Zehnder interferometer is a reconfigurable optical element, typically built from two beam splitters and two mirrors, and arrays of connected MZIs form the fabric of an optical neural network. Light enters the top of an MZI and is split into two paths by the first beam splitter; the paths interfere when the second beam splitter recombines them, and the light exits the bottom toward the following MZI in the array. Researchers can process data by performing matrix multiplication through the interference of these optical signals. The circuit strikes a good balance between ONN performance and the number of programmable MZIs. As a result, optical neural networks built on cascades of MZIs are being considered as a potential alternative to current deep learning hardware.
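To make the matrix-multiplication idea above concrete, the following is a minimal numpy sketch of an ideal MZI as a programmable 2x2 unitary; the 50:50 beam-splitter model and the placement of the phase shifters are illustrative assumptions, not the exact parametrization used in the MIT work.

```python
# Minimal sketch of an ideal Mach-Zehnder interferometer (MZI) as a
# programmable 2x2 unitary. The beam-splitter convention and phase-shifter
# placement are assumptions for illustration only.
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # ideal 50:50 beam splitter

def phase(p):
    """Phase shifter acting on the upper arm only."""
    return np.diag([np.exp(1j * p), 1.0])

def mzi(theta, phi):
    """Transfer matrix: input phase, splitter, internal phase, splitter."""
    return BS @ phase(theta) @ BS @ phase(phi)

T = mzi(0.7, 1.3)
print(np.allclose(T.conj().T @ T, np.eye(2)))    # True: the MZI is unitary
x = np.array([1.0, 0.0])                         # light entering the top port
print(np.abs(T @ x) ** 2)                        # how the output power splits
```

Cascading many such 2x2 blocks in a mesh is what lets the circuit realize larger unitary weight matrices.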
Compared with their electronic equivalents, optical network-based devices may offer superior energy efficiency and processing speed. Using programmable phase shifters, one can adjust each MZI’s output so that the mesh imitates any matrix-vector multiplication. The programmability of ONNs depends on these phase shifters; on the other hand, learning the circuit’s MZI parameters with the conventional automatic differentiation (AD) built into machine learning platforms takes a lot of time.
In addition, errors that arise in each MZI quickly compound as light passes from one device to the next. Because of the fundamental design of an MZI, there are situations where it is difficult to tune a device so that all the light flows out of the bottom port to the next MZI. If the array is very large and the device loses a small amount of light at each stage, only a tiny amount of power remains at the end. As a result, it is impossible to program the MZI perfectly into the cross state. The resulting component errors prevent programmable coherent photonic circuits from scaling.
Some errors can be avoided by anticipating them and configuring the MZIs so that subsequent devices in the array cancel out earlier errors. Several studies have focused on “correcting” hardware errors through global optimization, self-configuration, or local correction. Although correction reduces errors in standard MZI meshes by a quadratic factor, not all errors are eliminated. Error effects continue to grow with mesh size, posing a fundamental constraint on the scalability of these circuits.
Recently, a group of MIT researchers proposed two mesh architectures that achieve this scalable error reduction: a 3-splitter MZI (3-MZI) that corrects all hardware errors and an MZI+crossing design. Instead of the usual two beam splitters, the 3-MZI has three. The extra beam splitter mixes the light, making it considerably easier for an MZI to reach the setting needed to send all light out of its bottom port. The team notes that because the additional beam splitter is a passive component only a few microns in size, it requires no extra wiring and does not significantly alter the size of the chip.
When they tested it in simulations, the researchers found that their 3-MZI architecture could substantially reduce the uncorrectable error that limits accuracy. The amount of error in the device actually decreases as the optical neural network grows larger, the reverse of what happens in a device built from conventional MZIs. With the error reduced by a factor of 20, the researchers could use 3-MZIs to build a device large enough for commercial use. Using a benchmark optical neural network, the MIT team demonstrated that the improved MZI mesh is more than three times as resilient to hardware errors, enabling effective inference in a regime where conventional interferometric circuits fail.
The MZI+crossing architecture corrects correlated errors and has the added benefit of a larger intrinsic bandwidth, which allows the optical neural network to run three times faster. Correlated errors are caused by manufacturing flaws; for example, if a chip’s thickness is slightly off, the MZIs may all be off by roughly the same amount, so the faults are roughly the same. In this design, the MIT researchers modified the MZI’s configuration to make it more resilient to these kinds of faults.
In addition to requiring no extra phase shifters, this design uses a lot less chip space than the ideal redundant MZIs. The proposed architecture designs also offer progressive self-configuration, enabling error correction even when the source of the hardware errors is unknown. This research will pave the way for the creation of freely scalable, broadband, and compact linear photonic circuits.
The MIT researchers intend to test these architecture techniques on actual hardware now that they have demonstrated these techniques using simulations, and they will keep working toward an optical neural network they can successfully implement in the real world.
The U.S. Air Force Office of Scientific Research and a graduate research scholarship from the National Science Foundation both contributed to the funding of this study.
The study, which was published in Nature Communications, was led by Ryan Hamerly, a senior scientist at NTT Research and a visiting scientist at the MIT Research Laboratory of Electronics (RLE) and Quantum Photonics Laboratory. The paper was co-authored by graduate student Saumil Bandyopadhyay and senior author Dirk Englund, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS), leader of the Quantum Photonics Laboratory, and a member of the RLE.
Two leading companies, LinkedIn and Microsoft India, have announced the Skills for Jobs program, which provides access to 350 courses and six new Career Essentials Certificates aligned with six of the most in-demand jobs in the digital economy.
Both tech companies will provide 50,000 LinkedIn Learning scholarships to help people in India enhance their skills. They aim to train and certify 10 million people with skills for in-demand jobs by the end of 2025.
The Skills for Jobs program builds on the Global Skills Initiative, which helped about 80 million job seekers worldwide access digital skilling resources. So far, Microsoft has engaged 40 million learners through LinkedIn, Microsoft Learn, and non-profit skilling partnerships, including 14 million learners in Asia, of whom 7.3 million were from India.
LinkedIn and Microsoft analyzed job listings, using LinkedIn and Burning Glass Institute data, to identify the six most in-demand jobs for the program. The six jobs are Administrative Professional, Project Manager, Business Analyst, Software Developer, Data Analyst, and System Administrator. The program will offer courses and certifications in seven languages: English, French, German, Portuguese, Japanese, Spanish, and Chinese.
Dr. Rohini Shrivathsa, National Technology Officer, Microsoft India, said that bridging the skills gap in today’s digital economy is fundamental to addressing India’s employment challenges and to its societal progress. Microsoft has been investing consistently in initiatives to skill India’s youth, unlock the potential of underserved communities, and create opportunities that bring more women into the workforce.
When most people hear the term “graph,” they visualize a pair of perpendicular axes with the relation between two quantities drawn as a line, a curve, or bars of various heights. However, in data science and machine learning, a graph is a two-part data structure of vertices and edges, or G = (V, E), where V denotes the collection of vertices and E denotes the edges connecting them. Today, graphs are used to represent and examine relationships in data, including social networks, finance, transportation, energy grids, molecular fingerprints, protein-interface prediction, and disease classification. Graph neural networks are a subfield of machine learning focused on building neural networks that operate on graph data as efficiently as possible; they help data scientists work with data in non-tabular formats.
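As a concrete illustration of the G = (V, E) structure described above, here is a minimal Python sketch; the vertices, edges, and adjacency-list representation are illustrative assumptions, since the article names no particular library.

```python
# A toy graph stored as G = (V, E) and converted to an adjacency list.
V = ["alice", "bob", "carol", "dave"]            # vertices
E = [("alice", "bob"), ("bob", "carol"),         # edges connecting vertices
     ("carol", "dave"), ("alice", "carol")]

adj = {v: [] for v in V}                         # adjacency list
for u, w in E:
    adj[u].append(w)
    adj[w].append(u)                             # undirected graph

print(adj["carol"])                              # ['bob', 'dave', 'alice']
```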
In contrast to regular neural networks, building mini-batches in graph neural networks is extremely computationally expensive because the number of multi-hop graph neighbors grows exponentially with the number of network layers. This makes it difficult to improve the training and inference performance of graph neural networks. To tackle these problems, MIT researchers worked with IBM Research to create a novel approach dubbed SALIENT (SAmpling, sLIcing, and data movemeNT). Their technique drastically reduces the runtime of graph neural networks on large datasets, such as those with 100 million nodes and 1 billion edges, by addressing three primary bottlenecks. The method also scales well as computational capacity is increased from one to sixteen graphics processing units (GPUs).
The research was presented at the Fifth Conference on Machine Learning and Systems. It was supported by the U.S. Air Force Research Laboratory and the U.S. Air Force Artificial Intelligence Accelerator, as well as by the MIT-IBM Watson AI Lab.
The need for SALIENT became even clearer when the researchers began investigating the challenges that current systems face when scaling cutting-edge machine learning algorithms for graphs to enormous datasets, practically on the order of billions of nodes and edges. Most current technology achieves adequate performance only on smaller datasets that fit easily into GPU memory.
According to co-author Jie Chen, a senior research scientist at IBM Research and the manager of the MIT-IBM Watson AI Lab, large datasets refer to scales like the whole Bitcoin network, where certain patterns and data links might indicate trends or criminal activity. The Bitcoin blockchain contains around one billion transactions, so researchers wishing to spot illegal activity inside such a vast network must deal with a graph of a similar size. The team’s main objective is to build a system that can manage graphs large enough to represent the whole Bitcoin network. To keep up with the rate at which new data is created almost daily, they also want the system to be as efficient and streamlined as possible.
The team built SALIENT using a systems-oriented approach, applying basic optimization techniques to components that fit into pre-existing machine-learning frameworks such as PyTorch Geometric (a popular machine-learning library for GNNs) and the Deep Graph Library (DGL), which are interfaces for building machine-learning models. The key objective in developing a technique that could easily be incorporated into current GNN architectures was to make it intuitive for domain experts to apply the work to their own fields, speeding up model training and extracting insights faster during inference. The team also modified the architecture to keep all available hardware busy, including GPUs, data links, and CPUs: for instance, while the CPU samples the graph and prepares mini-batches of data to be sent across the data link, the GPU trains the machine-learning model or performs inference.
The researchers began by examining the performance of PyTorch Geometric, which revealed astonishingly low utilization of the available GPU resources. With minor modifications, they increased GPU utilization from 10 to 30 percent, yielding a 1.4 to 2 times performance improvement over open-source benchmark codes. With this fast baseline code, the algorithm could run one full pass (or “epoch”) over a large training dataset in 50.4 seconds. Believing they could do even better, the MIT team set out to examine the bottlenecks that arise at the beginning of the data pipeline, in the algorithms for graph sampling and mini-batch preparation.
In contrast to conventional neural networks, GNNs carry out a neighborhood aggregation operation, which computes information about a node using information from neighboring nodes in the graph, such as data from the friends of friends of a user in a social network graph. As a GNN adds layers, the number of nodes it must reach grows, which can strain a computer’s capabilities. Although some neighborhood sampling methods employ randomization to gain a modest boost in efficiency, this is insufficient because the methods were developed when modern GPUs were still in their infancy.
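The following is a minimal sketch of one neighborhood-aggregation step using mean aggregation; the toy graph, node features, and choice of aggregator are illustrative assumptions, not SALIENT’s actual kernels.

```python
# One neighborhood-aggregation step: each node's new feature vector is the
# mean of its neighbors' current feature vectors. Stacking this step pulls in
# information from neighbors of neighbors (2-hop), and so on.
import numpy as np

adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}     # toy graph as adjacency list
x = np.array([[1.0, 0.0],                        # one feature row per node
              [0.0, 1.0],
              [1.0, 1.0],
              [2.0, 0.0]])

def aggregate(x, adj):
    out = np.zeros_like(x)
    for node, neighbors in adj.items():
        out[node] = x[neighbors].mean(axis=0)    # average neighbor features
    return out

h1 = aggregate(x, adj)                           # 1-hop information
h2 = aggregate(h1, adj)                          # 2-hop information
print(h1)
print(h2)
```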
To overcome this, the team came up with a combination of data structures and algorithmic improvements that increased sampling performance. As a result, the sampling operation alone was enhanced by approximately three times, decreasing the runtime per epoch from 50.4 to 34.6 seconds. The team adds that they uncovered a previously overlooked fact: sampling can be done during inference at an appropriate rate, boosting total energy efficiency and performance.
According to MIT, earlier systems used a multi-process strategy for this sampling stage, which created extra overhead and unnecessary data transfer between processes. The researchers made their SALIENT method nimbler by building a single process with lightweight threads that kept the data on the CPU in shared memory. They also point out that SALIENT uses the shared memory of the CPU core cache to parallelize feature slicing, which collects the relevant data from the nodes of interest and their surrounding neighbors and edges. As a result, the overall runtime per epoch dropped from 34.6 to 27.8 seconds.
The last bottleneck was addressed with a prefetching step that streamlines the exchange of mini-batch data between the CPU and GPU by preparing data just before it is needed. The researchers expected this to consume all of the available bandwidth on the data link and bring the approach up to 100 percent utilization, but they observed only about 90 percent. They also discovered and fixed a performance issue in a well-known PyTorch library that caused redundant round-trip communication between the CPU and GPU, giving SALIENT a runtime of 16.5 seconds per epoch.
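A hedged sketch of the prefetching idea in PyTorch is shown below: the next mini-batch is copied to the GPU asynchronously, from pinned host memory, while the GPU is still busy with the current batch. It only illustrates the general technique described above, not SALIENT’s actual implementation, and it assumes the loader yields plain tensors.

```python
# Overlap host-to-device copies with GPU compute (illustrative only).
import torch

def prefetching_loop(loader, model, device="cuda"):
    it = iter(loader)
    # Stage the first batch: pin it in host memory and start an async copy.
    next_batch = next(it).pin_memory().to(device, non_blocking=True)
    for cpu_batch in it:
        current = next_batch
        # Kick off the asynchronous transfer of the following batch ...
        next_batch = cpu_batch.pin_memory().to(device, non_blocking=True)
        # ... while the GPU runs the model on the batch already resident there.
        model(current)
    model(next_batch)                            # process the final batch
```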
The team believes it achieved such strong results because of its painstaking attention to detail. In the future, the team hopes to apply the graph neural network training system to existing algorithms used to classify or predict the properties of each node, as well as to focus on identifying deeper subgraph patterns; the latter can help in identifying financial crimes.
People often rush to social networking platforms like Facebook, Instagram, and Twitter, and to other online platforms like LinkedIn and YouTube, to share trivial and important events from their lives. Whether it is mundane or exciting, there is an innate urge to talk about current affairs and even the vapid details of everything. Be it ranting about a traffic jam, poor customer service, or the festival season, people feel compelled to communicate their thoughts to the world via social media. While this may seem pointless, these ‘thoughts’ can be data points that help a government understand its citizens through sentiment analysis.
Sentiment analysis is the practice of using Natural Language Processing (NLP), a subset of machine learning, to recognize the sentiments and emotions communicated in human language. NLP enables machine learning-powered systems to recognize and evaluate the thoughts and opinions expressed in a post or any piece of content uploaded to the internet. First, NLP converts human language into a machine-readable form. Once the natural language input has been converted into machine-understandable text, natural language processing algorithms scan it to find trends, anticipate future events, or discern the precise message the user intended to communicate. Leveraging this technology allows organizations to pinpoint whether a customer has a favorable or unfavorable opinion of their goods or services and to take appropriate action in response.
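As a minimal illustration, the snippet below runs rule-based sentiment scoring with NLTK’s VADER analyzer; the choice of tool and the example posts are assumptions for illustration, since the article does not name a specific library.

```python
# Label short social media posts as positive, negative, or neutral.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)       # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

posts = [
    "Stuck in a traffic jam for two hours, the new flyover changed nothing.",
    "The festival arrangements in our city this year were wonderful!",
]
for post in posts:
    scores = analyzer.polarity_scores(post)      # compound score in [-1, 1]
    if scores["compound"] >= 0.05:
        label = "positive"
    elif scores["compound"] <= -0.05:
        label = "negative"
    else:
        label = "neutral"
    print(f"{label:8s} {scores['compound']:+.2f}  {post}")
```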
By analyzing the thousands of comments and opinions posted on social networking sites, online surveys, reviews, and sometimes even videos, businesses gain insight into exactly what is expected of them and can respond in real time. This has driven the reliance on sentiment analysis by brands that want to be thoughtful about what they talk about, how they market themselves, and how they stay accountable for their actions while aligning with their customers’ interests.
Sentiment analysis is used in various fields, including business and marketing, journalism, and others, and it has a wide range of applications that can support decision-making across many domains. Even government organizations can employ sentiment analysis to gain better insight into the opinions of their citizens, using it during elections, public actions, or even to manage and monitor a natural disaster.
As noted in the introduction, social networking sites like Twitter, LinkedIn, and others are among the most popular avenues for people to express their thoughts on a broad range of topics. Microblogging sites like Twitter can be an excellent breeding ground for sentiment analysis owing to the accessibility, visibility, and abundance of Twitter content. Twitter lets people express themselves by posting and interacting with brief tweets, and millions of tweets are posted every day on virtually every subject. This makes Twitter a resourceful platform for sentiment analysis. Besides, Twitter is the only platform that still caters to Gen Z, millennials, and boomers alike.
Meanwhile, platforms like LinkedIn can be viewed as collections of hyperactive communities that engage with issues in detail. Here, the narrative around any topic is directly shaped by an ‘influencer’ or ‘social celebrity,’ who exerts such influence because of their professional career, large follower base, or both. In comparison to Twitter, LinkedIn has quite a formal tone.
In smart cities and other cutting-edge urban centers where microblogging platforms like Twitter and apps like LinkedIn are widely used, the government can apply NLP to these platforms and, via sentiment analysis, clearly understand the issues and grievances of its population as they arise.
Every government aspires to improve the effectiveness of its public services and to take the initiative in serving its population better. Sentiment analysis can help it understand the primary problems citizens encounter, such as traffic congestion or inconsistent treatment at government offices and agencies like the police. The insights from such analysis can inform new policies addressing those concerns and grievances, and capture real-time public reactions to them. The latter will help government employees understand how public opinion has been affected by government initiatives and policies.
Ruling political parties can plan new policies or election campaigns based on the causes of people’s angst or happiness and the associated patterns. Opposition parties, too, can identify trends in people’s resistance to or support for new legislation and develop agendas for their own parties through sentiment analysis. Sentiment analysis enables both sides to stay aware of public opinion about their parties, their actions, and their statements; a single ill-judged comment can turn public opinion on social media and lead to severe reputational damage during an election season.
Governments may hesitate to leverage sentiment analysis because of potential privacy violations. However, this can be addressed by putting social media user data protection laws in place that govern how the government uses the data and that regulate data storage, data selling, and other privacy concerns.
The Ministry of Human Resources and Emiratisation (MoHRE) has developed an AI-enabled automated system that completes employment contracts without human intervention.
The new AI system is a part of the National Artificial Intelligence Strategy that focuses on setting the UAE as a global leader in artificial intelligence by building an integrated framework in different areas of the UAE.
As per the Ministry, 35,000 employment contracts were completed in the first two days after the new AI system’s launch. These contracts comprise new and renewed employment contracts approved after both parties signed. The new system uses advanced technologies to process and verify the images in the contracts, reducing processing time from two days to just 30 minutes.
Besides AI-enabled systems, the UAE has also adopted many technologies, such as an awareness program delivered via a self-guidance service, a smart mobile app, a smart communication framework, the Nafis platform, and more. Users can access the awareness program via the Ministry’s smart mobile app, which provides over 100 ministry services powered by big data and AI. The Ministry is also the first federal entity to have a verified WhatsApp business account on the Meta-owned app, with the channel available in both English and Arabic.