
DataRobot Achieves Google Cloud Ready – BigQuery Designation


DataRobot, the AI cloud leader, has achieved the Google Cloud Ready – BigQuery designation and plans to strengthen its partnership with Google Cloud to support joint customers. The designation confirms that the DataRobot AI Cloud platform satisfies a baseline set of BigQuery-related functional and interoperability standards.

DataRobot AI Cloud is a multi-cloud platform that can run at the edge, in public clouds, and in data centers. A global community of strategic technology, solution, and consulting partners powers the platform.

DataRobot AI Cloud’s availability on the Google Cloud Marketplace was announced in June 2022. Now, with the Google Cloud Ready designation, the two companies will work together to give data analytics teams faster access to all of their company’s data and the ability to write model predictions back into BigQuery.

Read More: Google Cloud Unveils a New AI-powered Medical Imaging Suite

Google Cloud Ready – BigQuery is a partner integration validation program that aims to boost customer trust in partner integrations with BigQuery. Under this program, Google Cloud engineering teams validate partner integrations in three stages:

  1. Run a series of data integration tests and compare results against benchmarks.
  2. Collaborate closely with partners to fill gaps.
  3. Improve documentation for shared customers.

This certification from Google Cloud gives customers more assurance that DataRobot AI Cloud integrates smoothly with BigQuery to produce even more intelligent business solutions.

Sirisha Kadamalakalva, SVP of Alliances and Partnerships at DataRobot, said, “This integration with BigQuery enables data analytics teams and data scientists to quickly enrich datasets and build powerful machine learning models.” She added that the company looks forward to delivering more value to joint customers.


NASSCOM will release an AI Resource Kit for Responsible AI


NASSCOM, a tech trade and commerce industry leader, will release an AI resource kit and a responsible AI hub in collaboration with Fractal Analytics, Microsoft, Deloitte, TCS, and IBM Research to ensure the responsible adoption of AI at scale.

The resource kit, according to NASSCOM, will enable businesses to independently evaluate and track the development and implementation of AI solutions. It will also ensure ethical compliance and suggest management structures for risk mitigation.

Debjani Ghosh, President of NASSCOM, said businesses are subject to the unconscious biases their owners carry, and acknowledging them is the biggest challenge. She added that technology and artificial intelligence are becoming the key drivers in ensuring that this bias does not hamper business efficiency.

Read More: Kaggle ML and DS Survey 2022: Key Insights

She said, “Consumer awareness around AI is increasing, however trust in its adoption is in a very nascent stage. Scaling AI has now become a key business imperative. It is, therefore, vital for us to figure out how to scale AI ethically and responsibly.”

NASSCOM intends to maintain the kit as an evolving reference, gradually building on the most recent research and best practices for responsible AI adoption. Stakeholders from industry, government, academia, think tanks, and civil society organizations will preserve and enhance the kit’s utility over time.


Apple introduces Ask Apple for developers to connect with experts


Apple has introduced Ask Apple, a new series of interactive one-on-one consultations and Q&As that gives developers more opportunities to connect directly with Apple experts for feedback, insight, and support.

Developers joining Ask Apple can inquire about several topics, such as testing the latest seeds, adopting new features like Dynamic Island, and implementing updated frameworks from the Worldwide Developers Conference (WWDC). Sessions can also provide insights on moving to Swift and SwiftUI and on preparing apps for new hardware and OS releases.

Ask Apple is free, and registration is open to all members of the Apple Developer Program and the Apple Developer Enterprise Program. The series allows developers to question Apple team members during one-on-one office hours and through Q&As on Slack. The Q&As let developers connect with Apple designers, evangelists, and engineers to get their queries answered, engage with other developers worldwide, and share what they have learned.

Read More: Apple’s Privacy Changes Break The Facebook-Google Advertising Monopoly In The Online Search Market

Office hours focus on creating and distributing compelling apps that take advantage of the latest technology and design. Developers can ask for code-level assistance, design guidance, input on implementing technologies and frameworks, advice on resolving issues, or help with App Review Guidelines and distribution tools.

Ask Apple is built on successful programs like Tech Talks and Meet with App Store Experts, which have offered developers more than 200 live presentations and thousands of office hours over the past year.


Kaggle ML and DS Survey 2022: Key Insights


Kaggle, the world’s largest data science community, released the Kaggle ML and DS Survey findings for 2022. Here are some insights from the survey:

Demographic characteristics

The survey shows that the number of data scientists working and residing in India has increased steadily over the last five years. Japan is among the other countries showing a rising trend, while countries like the US have shown near-stagnant growth, with an uptick in the number of data scientists during 2022.

Programming skills and coding infrastructures

As per the survey, Python and SQL remain the most prominent programming languages for a data scientist to know. Python outranks SQL by a significant margin and has surpassed R and other languages like C++, Java, and JavaScript, which the survey suggests are less essential for excelling in the field.

JupyterLab remains the most widely used notebook environment, followed by Google Colab and Kaggle Notebooks, displacing the traditional RStudio and MATLAB. The survey also reveals that many data scientists have actively shifted to VS Code for software development.

Machine Learning Framework

Scikit-learn stands out as the most popular framework, followed by TensorFlow and XGBoost. While these have stayed at the top of data scientists’ lists, their usage has remained nearly constant, whereas PyTorch has been growing steadily.

The findings include concrete numbers on the number of people working with data, trends in machine learning across industries, and the best approaches for aspiring data scientists to enter the profession. It is an intriguing example of a survey dataset because Kaggle provided all the data, not just the aggregated survey results, allowing analysts to study the data independently.

Kaggle ML and DS Survey Competition 2022

Kaggle announced a competition following the sixth annual industry-wide survey to surface a comprehensive view of the state of the machine learning and data science community.

It is initiating the annual Data Science Survey Challenge and will award US$30,000 in prizes to notebook authors who best describe a particular segment of the data science and machine learning community. The challenge is an opportunity for people to use their imagination and create a story of a group of people with whom they identify.

Read More: AWS Open-Sourced its EC2 Trn1 Instances Powered by AWS-Designed Trainium Chips

The submissions will be evaluated on the following:

  • Composition: the narrative and the subject should be well put together, researched, and supported by data and visualizations. 
  • Documentation: the code and notebooks should be understandable to an ordinary reader, with adequately cited sources and a concise analysis of each step. The documentation should represent the rationale behind your story.
  • Originality: the entry should be informative, thought-provoking, and non-plagiarized.

A submission must be contained in a single notebook and made accessible to the general public on Kaggle before the submission deadline to be considered valid. In addition to the Kaggle Data Science survey, participants are welcome to utilize any other datasets.


Humanoid robot Ai-Da becomes first robot to speak in the House of Lords


Humanoid painter Ai-Da made history by being the first robot to speak in the House of Lords. The life-size AI robot artist was quizzed by peers on her artwork at the Communications and Digital Committee, which is examining the future of the UK’s creative industries and how AI could impact the sector.

Ai-Da is a humanoid robot with artistic abilities designed to look like a human female. The robot uses artificial intelligence to create art and respond to questions. 

Ai-Da answered questions directly from peers during the session. However, creator Aidan Meller confirmed the questions had been submitted previously to ensure better quality answers from the artificial intelligence language model used to power the responses.

Read More: Elon Musk Unveils Optimus Prototype At Tesla AI Day 2022

When crossbench peer Baroness Bull asked how she produces art, Ai-Da replied that she produces her paintings using cameras in her eyes, AI algorithms, and a robotic arm that paints on canvas, resulting in visually appealing images.

Ai-Da said she also created poetry by analyzing text and identifying poetic structures, adding that without human consciousness, she depended on computer programs and algorithms. However, as a sign of technology’s limitations, the committee was delayed by several minutes as Ai-Da temporarily shut down and had to be rebooted.

Once back on, Ai-Da told Liberal Democrat peer Baroness Featherstone that the role of technology in creating art would continue to grow as artists find new ways to use technology to express, reflect, and explore the relationship between culture, science, and technology. 


New York City Proposes New Rules on Employers’ Use of Automated Employment Decision Tools


New York City established a new law last year requiring firms doing business in the city to conduct annual audits of their AI tools to ensure that biases do not manifest in the technologies. Under Int. No. 1894-A (enacted as Local Law 144), employers in New York City who use “automated employment decision tools” in their hiring and promotion procedures are required to commission a “bias audit” of those tools. Fines for each violation will range from US$500 to US$1,500.

The law, which will come into force on January 1, 2023, forbids employers and employment agencies from using automated employment decision tools unless some bias audit and notice criteria are satisfied, including carrying out bias audits and providing notifications to applicants and employees. The proposed regulations clarify many key terms, the prerequisites for a bias audit, the obligations for publishing the audit results, and the notices that must be given to staff and job hopefuls.

To give the AI Law trustworthy direction and clarity before it goes into force, New York City has now released new guidelines: on September 23, 2022, the New York City Department of Consumer and Worker Protection (DCWP) presented proposed new rules to implement the city’s AEDT law (Local Law 144).

In other words, the newly proposed rules fill critical loopholes in Local Law 144, which was enacted in December 2021. With a public hearing scheduled for October 24, the comment period is already open. The DCWP website allows interested parties to submit written comments or request to comment orally. The public hearing will also allow for comments, but those who wish to speak must register in advance by calling (212) 436-0396. Speakers will have three minutes to share their thoughts.

The AI Law defines AEDTs as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, and that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.” According to the proposed rules, “to substantially assist or replace discretionary decision making” means that the covered tool (a) relies solely on a simplified output, (b) uses a simplified output as one of a set of criteria where the output is weighted more than any other criterion in the set, or (c) uses a simplified output to overrule or modify conclusions derived from other factors, including human decision-making.

Read More: Iranian Government Admits using Facial Recognition to Identify Women Violating Hijab Rule

The proposed rules appear to make clear that an employer can comply with the AI Law’s standards by relying on a bias audit that was commissioned by a vendor and based on historical vendor data. There seems to be no requirement for employers to undertake their own independent analysis of a tool’s effects on their specific candidate pool. However, the proposed rules do not address the situation, common among AEDT vendors, where the vendor simultaneously acts as the developer.

According to the proposed rules, a “candidate for employment” is a person who has submitted an application for a specific position by providing the required information and/or materials in the manner requested by the employer or employment agency. The proposed rules thus make clear that the new law will not apply to prospective candidates who have not yet submitted an application for a position.

To conduct an audit, here are key takeaways from the proposed rules:

  1. If an AEDT is solely in charge of making an employment decision, if its decision is weighted more heavily than other factors, or if it supersedes a decision made by another process, including a human, it is subject to the law’s jurisdiction;
  2. Based on the EEOC framework, audits must offer data that allows for disparate impact analysis. This entails calculating the “selection” rate for each protected gender, race, and ethnicity group and contrasting it with the rate for the group that is most frequently selected;
  3. The term “independent” auditor refers to someone who was not engaged in “using or designing” the relevant AEDT.

Regarding the need to notify job applicants or current workers before using an AEDT, the rules state that covered companies must do so 10 business days in advance, as follows:

  1. For potential employees who live in the city, by publishing notification of the AEDT on the corporate website’s career or employment section, in the job announcement itself, or by sending the applicant an email or US mail directly;
  2. For present employees who live in the city, by incorporating information about the AEDT in a policy or procedure that is given to employees, by mentioning it in a job ad, or by distributing it through email or US mail.

The notification must give both candidates and employees details on how to request an alternative selection method or accommodation. The rules do not, however, specifically state that a covered company must offer an alternative hiring procedure. Additionally, within 30 days of receiving a written request for such information, a covered employer must post a notice on its website’s careers or jobs section or deliver written notice in person, via US mail, or through email. This notice should also specify the skills and qualities the AEDT will consider when evaluating an applicant or employee. Further, the employer must provide information regarding the kind of data acquired for the automated employment decision tool, the source of that data, and the employer’s or employment agency’s data retention policy, if it isn’t already available on the employer’s website.

The proposed rules also define the following important terms, which the AI Law left unspecified:

  1. To “screen” someone means to decide if they should be chosen or progress in the recruitment or promotion process.
  2. The term “selection rate” refers to the frequency with which members of a category are either chosen to advance in the recruiting process or given a classification by an AEDT. This rate can be determined by dividing the number of people in the category who advanced or were classified by the total number of people in the category who applied for a job or were under consideration for a promotion.
  3. The term “impact ratio” refers to (i) either the selection rate for a category divided by the selection rate of the most selected category or (ii) the average score of all respondents in a category divided by the average score of those in the highest scoring category.
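To make the arithmetic concrete, here is a minimal sketch of how these selection rates and impact ratios could be computed. The category names and counts are hypothetical, purely for illustration:

```python
# Hypothetical applicant counts per demographic category (illustrative only).
applied = {"Group A": 200, "Group B": 160, "Group C": 80}
advanced = {"Group A": 100, "Group B": 40, "Group C": 10}

# Selection rate: people in the category who advanced, divided by the
# total number in the category who applied.
selection_rates = {g: advanced[g] / applied[g] for g in applied}

# Impact ratio (definition i): each category's selection rate divided by
# the selection rate of the most frequently selected category.
top_rate = max(selection_rates.values())
impact_ratios = {g: rate / top_rate for g, rate in selection_rates.items()}

print(selection_rates)  # {'Group A': 0.5, 'Group B': 0.25, 'Group C': 0.125}
print(impact_ratios)    # {'Group A': 1.0, 'Group B': 0.5, 'Group C': 0.25}
```

Under the EEOC’s four-fifths rule of thumb, an impact ratio below 0.8 (as with Groups B and C in this toy example) is commonly treated as evidence of potential adverse impact.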

By regulating the use of AI in companies to reduce hiring and promotional bias, New York City is trying to follow in the footsteps of Illinois. Though Illinois has restricted the use of AI analysis of video interviews since 2020, New York City will be the first in the nation to regulate the employment process as a whole. The law intends to address concerns raised by the United States Equal Employment Opportunity Commission and the United States Department of Justice that “blind reliance” on AI technologies in the recruiting process might lead corporations to violate the Americans with Disabilities Act.

AI recruitment solutions are created to help HR teams at every stage of the hiring process, from posting job advertisements on job boards to screening candidate resumes to choosing the compensation to offer. AI can rapidly screen every application for prerequisites like educational background or experience level, and it can eliminate a recruiter’s unconscious prejudice for or against a candidate. HR teams might also find information gaps in job descriptions that prevent hiring the best applicant. Naturally, the objective behind employing AI-based hiring tools (automated employment decision tools) was to assist employers in finding candidates with the appropriate training and expertise.

Read More: How do geo-tagged data on Har Ghar Tiranga Website threaten an Orwellian Reality?

While the general idea was that AI would equip firms to sift through large volumes of resumes more cost-effectively, strategically, and efficiently, in practice this can foster biased hiring because the tools learn prejudiced selection tendencies from signals like language and demographics. There is also a dearth of publicly available data demonstrating how various technologies undermine attempts to increase diversity in hiring by developing biases of their own.

For instance, Amazon reportedly discontinued a machine learning-based hiring initiative in 2018 when it was discovered that the algorithm was biased against women. Amazon’s AI model had been developed to screen applicants by looking for trends in resumes the business received over a ten-year period. Since males had made up the majority of those applicants, the algorithm inferred that men were preferred over women, resulting in bias.

When individual data becomes tainted with small, overlapping biases, the limited size of the data sample makes it hard for AI to learn what the preferred outcome should be, producing ambiguous results. Moreover, the more often an AI system carries out these skewed behaviors, the more adept it may become at them. This creates a cycle that institutionalizes bias, defeating the purpose of relying on AI tools for hiring.

This is why companies like Walmart, Nike, CVS Health, and others are eradicating bias from their own hiring algorithms as part of The Data & Trust Alliance.

With the new New York City regulation going into effect next January, HR experts hope that bias from automated employment decision tools can be kept in check to an acceptable extent.


Neosoma HGG receives 510(k) clearance from the FDA


Neosoma HGG has received 510(k) clearance from the Food and Drug Administration (FDA). Neosoma HGG is an artificial intelligence-powered technology that may facilitate greater accuracy in the assessment of high-grade gliomas (HGGs) on brain magnetic resonance imaging (MRI).

Offering longitudinal tracking of patients with HGGs, Neosoma HGG provides tumor segmentation, facilitates imaging for 3D geometric analysis, and performs volumetric measurements, according to Neosoma, the manufacturer of Neosoma HGG.

In performance testing, Neosoma said the Neosoma HGG exceeded the assessment of individual neuroradiologists with a 95.5 percent accuracy rate in measuring the volume of HGGs at different points during a patient’s treatment course.

Read More: Meta’s ‘Horizon Worlds’ Inundated With Several Quality Issues

The company said the detailed objective measurements provided by Neosoma HGG aid in operative planning and the assessment of post-op progress, as well as the monitoring of chemotherapy treatment effectiveness.

“Clinicians usually debate the results of brain MRIs and whether the brain tumor is stable, responding to treatment, or progressing. Neosoma HGG will give us the objectivity needed to make our decisions easier and more accurate,” said Isabelle M. Germano, MD, MBA, FACS, a professor of neurosurgery and the director of the Comprehensive Brain Tumor Program at Mount Sinai Medical Center.


AWS Open-Sourced its EC2 Trn1 Instances Powered by AWS-Designed Trainium Chips


Amazon Web Services (AWS), a leading cloud service provider, open-sourced its EC2 Trn1 instances, built for high performance and powered by AWS-designed Trainium chips. For workloads like NLP, semantic search, recommendation engines, and fraud detection, Trn1 instances on AWS offer the quickest time to train popular machine learning models.

Many companies have developed, trained, and deployed machine learning models to power applications that revolutionize their operations and customer experiences. To increase accuracy, these machine learning models are getting more complicated and consuming larger amounts of training data. 

The models are run across thousands of accelerators, and as a result, the cost of training them increases. With up to 50% less cost to train deep learning models than the most recent GPU-based P4d instances, new Trn1 instances powered by the Trainium processors provide the best pricing performance and the fastest machine learning model training on AWS.

Read More: Cadence Plans to Apply Big Data to Optimize Workloads

Trn1 instances are built on the AWS Nitro System, a combination of AWS-designed hardware and software advancements that streamline the delivery of isolated multi-tenancy, private networking, and quick local storage. 

David Brown, VP of Amazon EC2, said that the company is looking forward to enhancing AWS Inferentia, its machine learning chip, and AWS Trainium, its 2nd-gen machine learning chip. He also said, “Trn1 instances powered by AWS Trainium will help our customers reduce their training time from months to days while being more cost efficient.”


Amazon Halted Testing its Robot Scouts


The e-commerce giant Amazon has halted testing of its home delivery robot Scout, Amazon’s latest attempt to make delivery autonomous. According to Amazon, the battery-powered robots are part of a plan to reduce greenhouse gas emissions from its delivery operations.

However, the company is scaling back its experimental efforts due to the slump in sales. Per CEO Andy Jassy, the company is reacting to the slowed growth in its retail segment by delaying some of its initiatives that have seen a weaker response.

Scout, the six-wheeled, cooler-sized vehicle intended to deliver goods to the front door, will reportedly be placed on hold for the time being, but Amazon has said it may revisit the concept in the future. Although multiple witnesses stated that the robots had trouble getting around debris or other small objects on the pavement, the company declined to elaborate on the reason for the halt.

Read More: Google and Amazon criticize Microsoft over cloud computing changes

The robots were designed to stop at a front door during testing and snap open their lids to allow a customer to pick up a package. Amazon began testing the cooler-sized robots on the streets of suburban Seattle before extending trials to Southern California and Tennessee.

Alisa Carroll, an Amazon spokesperson, said the company learned through feedback that the program was not meeting customer expectations.

Scout is not the only project Amazon has wound down. The company also discontinued Glow, its child-focused gaming service with video-calling features, and plans to shut down Amazon Care, its telehealth service, after acquiring One Medical.


Google AI Introduced Frame Interpolation for Large Motion (FILM)


Google AI has been working on frame interpolation and has introduced a new neural network, Frame Interpolation for Large Motion (FILM). Frame interpolation is the process of synthesizing in-between images from pre-existing ones. The technique is frequently used for temporal up-sampling to accelerate video refresh rates or produce slow-motion effects.

Google published “FILM: Frame Interpolation for Large Motion” at ECCV 2022, presenting a new technique to generate high-quality slow-motion videos from near-duplicate images. FILM is effective for both large and small motions, with state-of-the-art results.

At inference time, Google invokes the model iteratively to output multiple in-between images.
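As a rough illustration of this iterative scheme (not Google’s actual code), the sketch below recursively invokes a midpoint interpolator to fill in 2**depth − 1 frames between two inputs. The plain average is a placeholder assumption standing in for the learned FILM network, and scalar values stand in for images:

```python
def midpoint_model(frame_a, frame_b):
    # Placeholder for the learned FILM network, which predicts the middle
    # image from two inputs; a plain average is used here for illustration.
    return (frame_a + frame_b) / 2.0

def interpolate_recursive(frame_a, frame_b, depth):
    """Recursively invoke the midpoint model, returning 2**depth - 1
    in-between frames in temporal order."""
    if depth == 0:
        return []
    mid = midpoint_model(frame_a, frame_b)
    left = interpolate_recursive(frame_a, mid, depth - 1)
    right = interpolate_recursive(mid, frame_b, depth - 1)
    return left + [mid] + right

# Scalar "frames" stand in for images (0.0 = first frame, 1.0 = last).
in_between = interpolate_recursive(0.0, 1.0, depth=3)
print(in_between)  # [0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875]
```

Three rounds of invocation turn a two-image input into a nine-frame sequence (including the endpoints), which is how a pair of near-duplicate photos becomes a smooth slow-motion clip.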

The FILM model generates a middle image from two input images. There are three parts to the FILM model: 

  1. A feature extractor that summarizes each input image using deep multi-scale (pyramid) features.
  2. A bi-directional motion estimator calculates pixel-wise motion (i.e., flows) at each pyramid level.
  3. A fusion module that generates the final interpolated image.

Read More: Google AI digitizes sense of smell by mapping scent of molecules

Typically, multi-resolution feature pyramids and hierarchical motion estimates are used to accommodate significant motion. Small and swiftly moving items challenge this technique as they tend to vanish near the pyramid’s base. 

The above components solve this problem by sharing a motion estimator across pyramid levels, creating a network with fewer weights. Shared weights let small motions at deeper levels be interpreted the same way as large motions at shallower levels, increasing the number of pixels available for large-motion supervision.

Following feature extraction, FILM uses pyramid-based residual flow estimation to determine the flows from the yet-to-be-predicted center image to the two inputs. After estimating the bi-directional flows, the model aligns the two feature pyramids. Stacking the two aligned feature maps, the bi-directional flows, and the input images at each pyramid level creates a concatenated feature pyramid.
