GitHub’s AI-powered Copilot became generally available in June this year. GitHub investigated and quantified its impact on developer productivity, finding that 87% of developers said Copilot preserved their mental energy, and 74% experienced a rise in overall daily workflow satisfaction.
GitHub Copilot is an artificial intelligence-powered pair programmer that detects patterns in a developer’s history and context and suggests code to finish what they are writing. It entered technical preview in June 2021 before going live for all developers 12 months later.
The research covers the period since June 2021. Along with the two findings above, GitHub also found that 73% of developers said Copilot helped them stay in the flow. Dr. Eirini Kalliamvakou conducted the research, and more than 50% of the respondents were professional developers.
GitHub also divided a group of 95 developers into two groups to measure how quickly they completed a task with and without Copilot. Those using Copilot finished the task 55% faster than those without, among those who completed it at all, and a higher percentage of them completed the task: 78% vs. 70% without.
GitHub launched Copilot believing it would improve developer productivity, aid efficiency, and reduce decision fatigue. The research bears the giant developer hub and software repository out, with developers reporting that Copilot helped them stay focused on the task at hand, make meaningful progress, and feel good at the end of a day’s work. Developers reported a significant difference in satisfaction and productivity.
GitHub’s research used the SPACE framework, which measures Satisfaction and well-being, Performance, Activity, Communication and collaboration, and Efficiency and flow (hence S-P-A-C-E), and the company says more insights are still to be released.
A figure of an eccentric woman keeps appearing in images generated by an artificial intelligence tool. Supercomposite, the artist, has dubbed the woman Loab.
The artist first discovered Loab through a technique called ‘negative prompt weights,’ in which a user can get the AI to generate the opposite of whatever they are typing in the prompt.
With DALL-E and other AI-supported image generators, users are pushing the limits of what such technology can do through prompts. These image generators are trained on a vast supply of images from the web.
Naturally, the images produced by these artificial intelligence programs can be unpredictable and shocking. The latest buzz in the world of artificial intelligence is coming from an artist called Supercomposite, who shared images of a woman who appears when specific prompts are entered.
The artist used this method on the word ‘Brando.’ The tool generated the image of a logo with a skyline and the words ‘DIGITAPNTICS.’ The strange woman appeared when the technique was applied to the words in the logo.
Unfortunately, it is hard to pinpoint what is causing this woman’s figure to appear in the images. AI models are trained on billions of images and are too complex for anyone to trace how they reached a particular result.
Web 3.0, Cryptocurrency, Metaverse, NFT, and Blockchain are the focus of new future-centric technology courses. Such educational programs are being developed in India by TimesPro in collaboration with leading industry partners and the Indian Institutes of Technology (IITs).
Students who complete these new-age technology courses will receive certificates from institutes like IIT-Delhi, IIT-Roorkee, and IIT-Ropar, and IIT faculty are involved in designing many of the courses.
These programs will focus on learner-centricity and high engagement through case studies, projects, assignments, hackathons, and capstone projects.
In addition to core learning, career services such as expert sessions, resume preparation, and one-on-one mentoring with industry experts will be a part of the curriculum. The programs will be delivered through TimesPro’s state-of-the-art learning management system.
A Web 3.0 Centre of Excellence is also being launched as a part of the initiative to develop a robust virtual ecosystem of industry partners, resources, and a network of global SMEs to build a strong learner community.
These programs will give learners a deep understanding of the Web 3.0 technologies that are changing how the world interacts online, from the origins of Blockchain technology and cryptocurrencies, their applications and real-world use cases, to a holistic understanding of the programming languages used to code smart contracts in many Blockchain projects.
Covering advanced topics such as Web 3.0, Decentralized Finance (DeFi), Decentralized Autonomous Organizations (DAOs), non-fungible tokens (NFTs), and the Metaverse, the courses will act as stepping stones for learners to stay ahead of the curve.
The program launched Thursday is the first at a Jordanian university to cover AI and robotics. Its launch comes as Jordan gears up to transform itself in the digital era: the government has linked up all public offices electronically, and many services are now delivered online.
According to Nasser Al-Hunaiti, the dean of the engineering faculty, the UJ program is consistent with the general framework of the university’s master’s programs. He said anyone with a Bachelor’s degree in any engineering field could enroll.
DeCAIR has allocated 95,000 euros to the project. It has established an AI and machine learning laboratory and a robotics lab, purchasing electronic tablets and a video and picture conferencing system, according to Abandah.
The program’s curriculum includes five mandatory courses, namely Applied Machine Learning, Industrial Robotics, Applied Research Approach, Robotics Systems, and Machine Vision, in addition to eight optional courses. Dr. Musa Al-Yaman, a professor in the mechatronics engineering department, is responsible for developing curricula for the Bachelor’s and Master’s programs at the University of Jordan.
Qure.ai, a leading health tech company, and Erasmus MC, University of Rotterdam, have launched an AI Innovation Centre for medical imaging. Initially planned to run for three years, the center will conduct comprehensive research into detecting abnormalities using artificial intelligence algorithms.
The center will aim to understand more potential use cases for AI in medical sciences across Europe and guide physicians in adopting such technologies. Erasmus MC is a recognized innovator in healthcare excellence. It will run the lab and carry forward the musculoskeletal and chest imaging research using Qure.ai’s AI technology.
Dr. Jacob J. Visser, Radiologist & Chief Medical Information Officer at Erasmus, said, “In Qure.ai’s work to date, it is clear that they gathered detailed insights into the effectiveness of AI in healthcare settings, and together we will be able to assess effective use cases in European clinical environments.”
Adopting AI has the potential to alleviate healthcare clinicians’ burden of constrained resources and will also provide early warning systems. With Qure.ai’s technologies, the benefits of automated detection of anomalies and progressive detection are already visible.
Over the three years, radiologists and Ph.D. students will work together to generate evidence and publish it in peer-reviewed journals. The focus will be on chest imaging, including X-rays, as well as brain CTs.
Loram, the vehicle fleet company, plans to use Nauto’s AI technology to reduce the ill effects of distracted driving in its fleet of truck drivers. The company will use Nauto’s devices, which have computer vision and deep learning capabilities, to alert unfocused drivers.
According to a survey by Insight, a truck fleet software company, a single truck driver accounts for 4.5 accidents over a year. With Loram truck drivers covering over 14,000 miles a day, the risk is significant. After exploring several technologies, Grahan Rose, vehicle fleet manager at Loram, decided to proceed with Nauto’s AI and deep learning.
Nauto, founded in 2015, specializes in facial recognition and managing distracted driving. It installs a tiny camera device equipped with computer vision technology to capture what is happening on the road and what the driver is doing.
The camera does not record everything, only the roughly 30 seconds around a probable collision or driving violation. The device also computes the probability of a collision when drivers are distracted and alerts them when that probability exceeds 30 percent. If it detects an imminent collision, it sounds an alarm.
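The alerting behavior described above amounts to a rolling event buffer plus a threshold check. Here is a minimal sketch of that pattern; the class, the probability values, and the frame rate are hypothetical illustrations, not Nauto’s actual implementation:

```python
from collections import deque

ALERT_THRESHOLD = 0.30   # alert when collision probability exceeds 30%
BUFFER_SECONDS = 30      # only about this window around an event is kept


class DashcamMonitor:
    def __init__(self, fps=10):
        # Rolling buffer holds roughly 30 s of frames instead of a full recording.
        self.frames = deque(maxlen=BUFFER_SECONDS * fps)
        self.saved_clips = []

    def on_frame(self, frame, collision_probability, driver_distracted):
        self.frames.append(frame)
        if driver_distracted and collision_probability > ALERT_THRESHOLD:
            self.alert()
            # Persist only the buffered window around the event.
            self.saved_clips.append(list(self.frames))

    def alert(self):
        print("ALERT: eyes on the road")


monitor = DashcamMonitor()
for i in range(5):
    monitor.on_frame(f"frame{i}", collision_probability=0.1, driver_distracted=False)
# A distracted moment with high collision probability triggers an alert
# and saves the surrounding buffer.
monitor.on_frame("frame5", collision_probability=0.45, driver_distracted=True)
```

The `deque` with a fixed `maxlen` is what lets the device avoid recording everything: old frames fall off the front automatically.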
Loram has been involved with Nauto since 2018 for overall road traffic management and monitoring drivers’ behavior. Rose said, “It’s significantly decreased our tailgating, our cellphone usage. Our guys are going hands-free and not picking up their cellphones and getting distracted when they’re driving.”
Australia-based AICRAFT and Bengaluru-based aerospace firm Valdel Advanced Technologies signed an MoU on machine learning and artificial intelligence. The memorandum was inked in the presence of the Indian Space Research Organisation (ISRO) on the concluding day of the Bengaluru Space Expo.
The memorandum aims to help Valdel in simulation and manufacturing using AICRAFT’s advanced AI capabilities. Valdel’s services will be used to test AICRAFT’s platforms and systems.
Sarah Kirlew, Consul General for South India, said, “We are pleased that Australian industry has continued to explore tangible space collaboration with India over the past few days.” The statement was delivered in tandem with six other memorandums signed by several Australian and Indian companies, two of which were based in Bengaluru alone.
Additionally, an Australian aerospace company, Space Machines, opened a new R&D center in Bengaluru. The company will collaborate with Ananth Technologies on product development, testing, integration, and joint-space missions. Another Australian company, HEX20, will collaborate with Skyroot Aerospace, a Hyderabad-based company, to launch spacecraft and avionics.
The BSE saw many more collaborations involving companies such as GalaxEye, QL Space, SatSure, Australia’s SABRN Health, and Altdata.
RunwayML has introduced a new text-to-video feature in its AI-backed web video editor. The update lets users edit videos with written commands called “prompts.” Runway posted a demonstration reel showing how a simple text input can alter a video: you write commands like “remove the object” or “make it more cinematic” in an input box, and it gets done.
The promotional video also shows text-to-image generation similar to Stable Diffusion, text overlay, character masking, and much more.
Text-to-video technologies are still in a primitive state because of high computational requirements and the scarcity of training datasets with metadata. Runway’s text2video is not the first of its kind; there have been other successful attempts, such as CogVideo, which can generate videos, though only at low resolution.
With each new attempt, it is reasonable to expect enhanced quality in text2video technologies; Runway’s teaser is a step forward in synthetic video generation.
Until now, Runway has been accessible as a web-based commercial video editing product. It is not free; it charges a monthly fee, including charges for cloud storage. The new text-to-video feature is currently in a controlled “Early Access” phase, available to only a few people. You can sign up for the waitlist on Runway.
MIT researchers and the MIT-born startup DynamoFL have created FedLTN, a federated learning-based system. FedLTN is based on the lottery ticket hypothesis, a machine learning concept which postulates that within incredibly large neural network models there exist considerably smaller subnetworks that can perform at a similar level. The researchers liken finding one of these subnetworks to finding a winning lottery ticket; hence, the ‘LTN’ in FedLTN stands for ‘lottery ticket network.’
The advent of powerful computer processors and the availability of abundant data for neural network training have led to enormous advances in machine learning-based technologies. Machine learning models typically perform better when trained on a wider variety of data, encouraging businesses and organizations to gather as much information as possible from their consumers, including data from sensors in user devices, GPS, cameras, CCTV surveillance, wearables, smartphones, and EHRs. However, from a privacy standpoint, user-generated data, such as location data, private medical records, and social interactions, is typically sensitive, and there is a risk of major privacy infringement if it is compiled on a centralized server.
In addition to privacy concerns, relaying data to a central server for training creates problems such as higher network expenses, management and business-compliance costs, and potentially regulatory and legal complexities. Moreover, with increasing network congestion, it is likely to become challenging to send all training data to a remote server, inhibiting the adoption of centralized machine learning on user devices connected over wired and wireless networks.
The need for privacy-preserving machine learning is growing as the general public and lawmakers become more aware of the data revolution. In light of this, research on privacy-respecting methods like homomorphic encryption, secure multiparty computing, and federated learning is becoming more and more prominent. In this post, we’ll concentrate on how federated learning makes privacy feasible.
Federated learning, sometimes referred to as collaborative learning, enables the mass training of models using data that is still dispersed across the devices where it was initially created. In this way, millions of people train their models on their devices using local datasets. Then users communicate insights like model parameter updates of the local model to a central server. The server’s responsibility includes combining all participating clients’ weights into a new model version. The users are subsequently given a new copy of the modified global model to start the subsequent federated training cycle. This approach is continued until the model achieves convergence. Since the centralized training orchestrator only sees each user’s contribution through model updates, the sensitive data stays with the owners of the data, where the initial training is carried out.
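The round described above can be sketched in a few lines. This is a minimal, self-contained illustration of federated averaging (FedAvg) using a toy logistic-regression “model” on synthetic data; the function names, learning rate, and data are all illustrative assumptions, not the FedLTN or DynamoFL implementation:

```python
import numpy as np


def local_update(weights, data, labels, lr=0.1, epochs=5):
    """One client's local training: gradient descent on its private data.
    A simple logistic regression stands in for an arbitrary model."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-data @ w))            # sigmoid predictions
        grad = data.T @ (preds - labels) / len(labels)  # logistic-loss gradient
        w -= lr * grad
    return w


def fedavg(client_updates, client_sizes):
    """Server step: combine client models into a new global model by
    averaging parameters, weighted by each client's dataset size."""
    total = sum(client_sizes)
    return sum(n / total * w for w, n in zip(client_updates, client_sizes))


# One federated round with two hypothetical clients and synthetic data.
rng = np.random.default_rng(0)
global_w = np.zeros(3)
clients = [(rng.normal(size=(20, 3)), rng.integers(0, 2, 20)) for _ in range(2)]

# Each client trains locally; only the resulting weights leave the device.
updates = [local_update(global_w, X, y) for X, y in clients]
global_w = fedavg(updates, [len(y) for _, y in clients])
print(global_w.shape)  # the server never sees the raw data, only weights
```

In a real deployment this loop repeats for many rounds until the global model converges, with the updated `global_w` sent back down to clients each time.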
Despite its objective of improving user privacy and reducing communication costs by sharing only updated parameters, federated learning faces three significant bottlenecks. First, the quality of data contributed by different end-user participants can vary considerably: devices and individuals differ in their capacity to provide training data, and there may be unforeseen random errors during data collection and storage. Since each user collects their own data, the data do not necessarily follow the same statistical patterns, which hurts the performance of the combined global model. Data quality must therefore be considered alongside participants’ privacy concerns to ensure the learning process is impartial and free from discrimination.
Second, the combined model is created by averaging the results, meaning it is not customized for each individual. Third, transferring local model parameters to the central server, and copies of the updated global model back to local devices, requires moving a lot of data at high connection costs.
MIT researchers devised a solution that addresses all three issues at once. It reduces the size of the combined machine-learning model while increasing accuracy and speeding up communication between users and the central server. It also ensures that each user obtains a model better tailored to their environment, improving performance. Compared to alternative methods, the team reduced the model size by nearly an order of magnitude, cutting communication costs for individual users by a factor of four to six. Their solution also boosted the model’s overall accuracy by approximately 10%.
The researchers implemented the lottery ticket hypothesis with an iterative pruning technique: while the model’s accuracy stayed above a certain threshold, they removed nodes and the connections between them, then checked that the leaner network’s accuracy still cleared the threshold.
Previous methods have used this pruning methodology in federated learning to shrink machine learning models so they can be shared more efficiently, but while those methods sped things up, model performance deteriorated. The researchers therefore used a few cutting-edge methods to accelerate the pruning procedure while improving the accuracy and personalization of the new, smaller models.
They accelerated the pruning process by skipping the step in which the remaining parts of the pruned neural network are “rewound” to their initial values. In addition, the model was trained before being pruned, which improved its accuracy and enabled faster pruning.
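The core prune-and-check loop can be illustrated with generic magnitude pruning on a toy model. Everything here is a simplified stand-in: FedLTN prunes a trained neural network inside a federated loop, whereas this sketch prunes the weights of a linear classifier while an accuracy threshold holds:

```python
import numpy as np


def accuracy(w, mask, X, y):
    """Accuracy of a masked linear classifier (toy evaluation)."""
    preds = (X @ (w * mask)) > 0
    return (preds == y).mean()


def iterative_prune(w, X, y, threshold=0.8, prune_frac=0.2):
    """Generic iterative magnitude pruning: repeatedly zero out the
    smallest surviving weights as long as accuracy stays above the
    threshold, mimicking the prune-and-check loop described above."""
    mask = np.ones_like(w)
    while mask.sum() > 1:
        alive = np.flatnonzero(mask)
        k = max(1, int(len(alive) * prune_frac))
        # Candidate: drop the k smallest-magnitude surviving weights.
        drop = alive[np.argsort(np.abs(w[alive]))[:k]]
        trial = mask.copy()
        trial[drop] = 0
        if accuracy(w, trial, X, y) < threshold:
            break  # pruning further would hurt accuracy; stop here
        mask = trial
    return mask


# Toy setup: only the first of three weights actually matters for the labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X[:, 0] > 0
w = np.array([5.0, 0.01, -0.02])

mask = iterative_prune(w, X, y)
print(int(mask.sum()))  # nearly all weights pruned, accuracy preserved
```

The surviving mask is the “winning ticket”: a much smaller sub-model that still meets the accuracy bar, which is what makes it cheap to ship between clients and server.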
To keep each model customized to its user’s environment, the researchers were careful not to prune layers of the network that gather crucial statistical information about that user’s particular data. Additionally, each time models were combined, information already available at the central server was used, saving time and avoiding repeated communication rounds.
When the researchers tested FedLTN in simulations, they found that it improved performance and cut communication costs across the board. In one experiment, a model created using a conventional federated learning method was 45 megabytes, while the model created using their technique was just 5 megabytes with the same accuracy. In another test, a state-of-the-art approach needed 12,000 megabytes of communication between users and the server to train a single model, compared to FedLTN’s 4,500 megabytes.
Even the worst-performing users with FedLTN had a performance improvement of more than 10%. And according to Vaikkunth Mugunthan, Ph.D. ’22, lead author of this research paper, the total model accuracy outperformed the state-of-the-art personalization algorithm by over 10%.
After creating and perfecting FedLTN, Mugunthan is currently attempting to incorporate the method into DynamoFL. He wants to improve this solution in the future, especially by utilizing the same techniques on unlabeled data. Mugunthan hopes this research encourages other academics to reconsider how they approach federated learning.
Mugunthan collaborated on the paper with his adviser, senior author Lalana Kagal, a principal research scientist in the Computer Science and Artificial Intelligence Laboratory (CSAIL).
The Google team, along with Alex Wiltschko and Richard Gerkin, has accomplished a remarkable feat by mapping the scent of molecules and digitizing the sense of smell. By doing so, the team has opened up the possibility of discovering new odors and molecules.
The researchers validated their model against new molecules, linked their findings to biological odor mechanisms, and expanded their results to find new approaches to global health challenges. This transformational research’s real-world applications are beyond exciting.
The names of the smells that various molecules are said to evoke, such as meaty, floral, or mint, were combined with thousands of examples of those molecules in a graph neural network (GNN) model that Google AI created in 2019. This approach let the team study the correlation between a molecule’s structure and its likelihood of carrying a specific odor label.
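The idea of a GNN predicting odor labels from molecular structure can be sketched at its simplest: atoms are nodes, bonds are edges, each node exchanges features with its neighbors, and a pooled molecule vector is scored against odor labels. This is a bare-bones illustration with random weights, not Google AI’s architecture; all shapes and names are illustrative:

```python
import numpy as np


def message_passing_layer(adj, node_feats, W):
    """One simplified GNN layer: each atom (node) averages its
    neighbors' features, then applies a learned linear map + ReLU."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    agg = adj @ node_feats / deg      # mean over neighboring atoms
    return np.maximum(agg @ W, 0)     # ReLU nonlinearity


def predict_odor_logits(adj, node_feats, W1, W_out):
    """Molecule-level prediction: run one layer, pool all atoms into a
    single vector, then score each odor label (e.g. floral, minty)."""
    h = message_passing_layer(adj, node_feats, W1)
    mol_vec = h.mean(axis=0)          # readout: average over atoms
    return mol_vec @ W_out            # one logit per odor label


# Toy 3-atom "molecule": a path graph A-B-C with random atom features.
rng = np.random.default_rng(1)
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
feats = rng.normal(size=(3, 4))       # 4 features per atom
logits = predict_odor_logits(adj, feats,
                             rng.normal(size=(4, 8)),   # layer weights
                             rng.normal(size=(8, 5)))   # 5 odor labels
print(logits.shape)
```

Training would fit `W1` and `W_out` so the logits match human odor annotations; the learned molecule vectors are what get laid out as points on the odor map.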
The principal odor map (POM) displays pairs of odors perceived as similar as nearby points with a similar hue. The Google AI researchers show how the map can be used to understand these properties in fundamental biology, predict the odor properties of new molecules, and address urgent global health issues. Several tests have already been run on the map.
Test 1: Testing with molecules that did not correlate with odors
The researchers tested whether the model could accurately predict the smells of novel molecules that were not included in its development.
Test 2: Linking odor quality to the fundamental biology
The researchers tested whether the odor map could predict animal odor perception and the underlying brain activity. They discovered that it accurately predicted the behavior of most of the animals studied by neuroscientists, including mice and insects, as well as the activity of sensory receptors, neurons, and synapses.
Test 3: To address the global health problem
Because the odor map closely tracks animal perception and biology, it opens up new possibilities; in general, the POM can be used to predict animal olfaction. The researchers chose to retrain the POM to tackle one of humanity’s most significant issues: the spread of diseases carried by ticks and mosquitoes.