Reddit has introduced a new way for users to make cryptocurrency payments tied to Community Points. The platform has partnered with the popular crypto exchange FTX to bring new crypto-enabled benefits to Reddit Community Points.
Reddit Community Points are a measure of reputation within a community. They are displayed next to usernames in subreddits so that the most prominent contributors stand out from the crowd. Community Points live on the Arbitrum Nova blockchain, which lets users take their reputation anywhere they are recognized on the Internet.
The integration of Reddit with FTX Pay will allow users to buy Ether (ETH) from supported Reddit apps. The crypto can then be used to pay the blockchain network fees for on-chain Community Points transactions.
Community Points will let communities create Special Memberships that users can purchase with points; these memberships unlock multiple features. Users will also be able to tip someone for a comment or post, and Community Points can be sent to any Redditor with a crypto Vault.
Further, users will be able to run weighted polls to make big decisions in their community, add animated Emoji, and embed GIFs. The polls will give a more significant voice to people who have contributed extensively to the community. The more Community Points someone has, the more weight their vote will carry.
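The weighted-poll mechanic described above can be sketched roughly as follows. This is an illustrative model only, not Reddit's actual implementation; the function name and sample data are hypothetical.

```python
# Illustrative sketch: each vote carries as much weight as the voter's
# Community Points balance (hypothetical data, not Reddit's code).
from collections import defaultdict

def tally_weighted_poll(votes, points):
    """votes: {username: option}, points: {username: Community Points}."""
    totals = defaultdict(int)
    for user, option in votes.items():
        # Heavy contributors get a proportionally bigger say.
        totals[option] += points.get(user, 0)
    return dict(totals)

votes = {"alice": "yes", "bob": "no", "carol": "yes"}
points = {"alice": 120, "bob": 500, "carol": 40}
print(tally_weighted_poll(votes, points))  # {'yes': 160, 'no': 500}
```

Here bob's single "no" outweighs two "yes" votes because his Community Points balance is larger, which is the behavior the article describes.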
Artificial intelligence (AI) systems to prevent elephant deaths in train collisions will soon be installed by the Tamil Nadu forest department along the Madukkarai and Walayar train tracks in Coimbatore. Reputable companies have already bid to install the AI systems in these wildlife settings.
The two railway lines passing through the Madukkarai forest range between Madukkarai and Walayar have been hotspots for elephant crossings and for fatal collisions with speeding trains.
The artificial intelligence system will warn officials about elephant crossings. The problem areas will be divided into three zones: the red zone is the first 50 m from the centre of the track, the next 50 m is the orange zone, and a further 50 m is the yellow zone. A luminous alert and an acoustic alert (hooter) will be installed in the console room and at all sensor towers.
If an elephant enters the yellow zone, an alert is generated in the console room and passed on to the forest watchers. If the elephant crosses into the orange zone, alerts go to forest watchers, guards, the railway station master, and forest range officers.
If the elephant enters the red zone, emergency alerts are sent to the divisional engineers of the railways and district forest officials, who will inform the loco pilot. Details of the elephant's location and distance from the track are conveyed to the loco pilot in advance so that appropriate action can be taken.
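The escalation scheme described above can be sketched as a simple lookup. The zone boundaries and alert recipients follow the article; the straight-line distance measurement and all names are assumptions, since the deployed system's internals are not public.

```python
# Minimal sketch of the three-zone alert escalation described above.
# Zone widths (50 m each) and recipients are taken from the article;
# the rest is an assumed illustration, not the actual deployed system.

def classify_zone(distance_m):
    """Map an elephant's distance from the track centre to a zone."""
    if distance_m <= 50:
        return "red"      # nearest the track: emergency
    elif distance_m <= 100:
        return "orange"
    elif distance_m <= 150:
        return "yellow"
    return None           # outside the monitored corridor

ALERT_RECIPIENTS = {
    "yellow": ["console room", "forest watchers"],
    "orange": ["forest watchers", "guards", "railway station master",
               "forest range officers"],
    "red": ["divisional engineers of railways", "district forest officials",
            "loco pilot"],
}

def alert_for(distance_m):
    """Return the zone and the list of people to notify."""
    zone = classify_zone(distance_m)
    return zone, ALERT_RECIPIENTS.get(zone, [])

print(alert_for(120))  # yellow-zone alert to the console room
print(alert_for(30))   # red-zone emergency, escalated to the loco pilot
```

The point of the tiered design is that each closer zone widens the alert audience, so by the time an animal is in the red zone the loco pilot already has its location and distance.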
Hyundai Motor has announced it will spend $424 million to build an artificial intelligence (AI) research center in the US. The center aims to bolster the company's edge in robotics technology.
Hyundai's three key auto affiliates, Hyundai Mobis, Kia, and Hyundai Motor, will invest $127.1 million, $84.7 million, and $211.9 million, respectively, in the AI center, which will be located in Boston.
The company will invest its resources across the technical areas of athletic AI, cognitive AI, and organic hardware design. According to the company, each of these disciplines will contribute to the progress in advanced machine capabilities.
The AI center is tentatively named the Boston Dynamics AI Institute. It will be headed by Marc Raibert, founder and former chief of Boston Dynamics, the US-based robotics company Hyundai acquired last year.
Euisun Chung, chairman of Hyundai Motor, has been investing heavily in automotive software and related mobility technologies. This includes Software Defined Vehicles (SDVs), the concept that a vehicle's software capability defines the quality of the car and of the driving experience.
Hyundai said it would also establish a new software center in South Korea to accelerate expansion into electrification, self-driving, and other advanced auto technologies. As part of the plan, Hyundai has recently acquired a Seoul-based autonomous driving software and mobility platform startup, 42dot, for $211.1 million.
TikTok has rolled out a new in-app text-to-image AI generator called AI Greenscreen that lets users type in a prompt and receive an image that can be used as the background of their videos. The effect can be accessed through TikTok's camera screen.
The new filter follows the launch of OpenAI's increasingly popular DALL-E 2. However, TikTok's generator is quite basic compared with leading text-to-image models such as Google's Imagen and DALL-E 2: AI Greenscreen produces abstract imagery, whereas Imagen and DALL-E 2 can create photorealistic images.
TikTok surfaces a range of suggested prompts when the user selects the effect, such as 'Hidden village in the mountains', 'Erupting rainbow volcano', and 'Snorkeling in a purple ocean'. These prompts reflect TikTok's focus on abstract imagery for its AI generator, and users can experiment with their own prompts to get similar results.
Currently, the filter is being used for a few popular TikTok trends, including one where the user enters their name into the generator to see what their aesthetic looks like. Another trend includes users entering their birthdays into the generator.
Since the new text-to-image generator is available to millions of users, it carries some limitations to keep it within community guidelines. The company is positioning it as a fun way for creators to generate backgrounds for TikTok videos. And because DALL-E 2 and Imagen are not widely available, TikTok's new effect offers an accessible alternative for users who want to experiment with text-to-image AI.
Cyberattacks are one of the most significant issues affecting businesses and customers, degrading organizations' computer systems through malware, fraud, data theft, and more. With the explosion in online crime, people are increasingly interested in protecting their systems and businesses by learning cybersecurity. If you want to improve your own cybersecurity knowledge, or secure your organization by educating your employees, this article will help: it lists freely available cybersecurity courses that can help you build a career in cybersecurity.
Introduction to Cybersecurity Tools and Cyberattacks: Coursera
Offered by IBM Security Learning Services, the Introduction to Cybersecurity Tools and Cyberattacks course on Coursera gives you the background required to understand the basics of cybersecurity. The course covers the history of cybersecurity and the types and motives of cyberattacks, along with the key terminology, basic concepts, and tools used in the field.
It introduces terms such as firewalls, antivirus, cryptography, penetration testing, and digital forensics. The course takes about 20 hours, at 5 to 6 hours per week. After completing it, you can move on to specialization programs such as the IT Fundamentals for Cybersecurity specialization and the IBM Cybersecurity Analyst Professional Certificate.
Introduction to Information Security: Great Learning
Great Learning Academy offers the Introduction to Information Security course to teach you the fundamentals of computer security and the attacks that affect your systems. It starts with the attacker's lifecycle and works through case studies of well-known companies, attack types, causes, and prevention, and also covers breaches such as target breaches and password breaches.
As this course is for beginners, you do not need any previous knowledge of cybersecurity. After completing it, you can enroll in more advanced cybersecurity courses and begin your career in the field. The course runs 1.5 hours and has more than 92,705 enrollments.
Introduction to Cybersecurity: edX
Offered by the University of Washington, Introduction to Cybersecurity is another introductory course, on edX, for learning the basics of cybersecurity. It is ideal for learners curious about the world of internet security, and teaches you to identify and differentiate threat actors and their motivations.
The course runs 6 weeks, at 2 to 4 hours per week. On completion you will have both national (US) and international perspectives on the cybersecurity landscape, and you can go on to enroll in the professional certificate program 'Essentials of Cybersecurity.'
Building a Cybersecurity Toolkit: edX
Building a Cybersecurity Toolkit is another introductory cybersecurity course offered by the University of Washington on edX. It helps you develop the skills and characteristics that expand your cybersecurity knowledge, and guides you in identifying the tools and skills needed for a professional cybersecurity toolkit.
The course is taught by David Aucsmith, Senior Principal Research Scientist in the Applied Physics Laboratory at the University of Washington. It teaches you to match appropriate tools to different cybersecurity management purposes.
The course runs 6 weeks, at 2 to 5 hours per week. There are no prerequisites, so anyone curious about cybersecurity can enroll.
Introduction to Cybersecurity for Businesses: Coursera
Offered by the University of Colorado on Coursera, Introduction to Cybersecurity for Businesses was developed to cover the practical aspects of cybersecurity in a way anybody can easily understand. It is a beginner-level course that shows how businesses secure their networks.
The syllabus includes practical computer security, the CIA triad, cryptography, digital encryption, and more. The course has an excellent rating of 4.7 and more than 39,206 enrollments; it takes approximately 5 months and requires no prior knowledge.
Introduction to Cybersecurity: Swayam
Introduction to Cybersecurity is another introductory course for beginners, offered by Uttarakhand Open University, Haldwani, and IGNOU via Swayam on Class Central. It serves as a foundation for cybersecurity specializations across Indian universities.
The syllabus consists of video lectures designed by experts; you can download the lectures and study materials and work at your own pace. After finishing the syllabus, you can put questions to the instructor, who is available online, and students then take an online certification examination. The course runs 12 weeks, with a new section each week.
Introduction to CISSP Security Assessment, Testing, and Security Operations: Simplilearn
Skillup offers the introductory cybersecurity course 'Introduction to CISSP Security Assessment, Testing, and Security Operations,' which builds a comprehensive understanding of security assessment, testing, and security operations. It also covers penetration testing, recovery and backup, asset and malware management, log management, and transactions.
The course has an excellent rating of 4.7 and more than 3,154 enrollments. On completion you will clearly understand the keys, components, tools, and methods of CISSP domains 6 and 7. The course runs 4 hours across 2 lessons and their sub-topics. You can take it free for 90 days with a Simplilearn trial account; after 90 days you need to pay for the course.
Cybersecurity: Class Central
The Cybersecurity course offered by CEC via Swayam on Class Central covers all the essential aspects of cybersecurity, including cyberlaw. It is aimed at postgraduate students, working professionals, and anyone interested in the field, and gives an overview of the legal implications of cybercrimes, scams, and fraud.
The course runs 15 weeks, with a new topic each week. It is well rated, with more than 2.4k ratings, and there are no prerequisites, although basic computer knowledge will help you pick up the course's terminology and concepts quickly.
Cybersecurity Tools, Techniques, and Counter Measures: Class Central
Created by IGNOU and Dr. Babasaheb Ambedkar Open University on Class Central, the Cybersecurity Tools, Techniques, and Counter Measures course is another introductory course that helps you understand the cybersecurity landscape both theoretically and practically.
The course provides cybersecurity awareness and training that increase the chances of catching a scam or attack, helping to minimize damage to resources and protect information. After completing it, you can pursue many cybersecurity career paths, such as cybersecurity analyst, network/application security analyst, security automation engineer, cybersecurity practitioner, cyber defense analyst, penetration tester, and more.
It also teaches you to safeguard the confidentiality, integrity, and availability of information in computer systems, and guides you through topics like network security, cryptography, risk management, physical security, architecture, and more. The course runs 12 weeks, covering different topics and sub-topics each week.
Systems and Application Security: Coursera
(ISC)² Education and Training offers the Systems and Application Security course on Coursera, which teaches you to understand malicious computer code. The course covers both technical and non-technical attacks on your systems and shows how to protect them from different cyberattacks.
The course gives an overview of topics such as endpoint device security, securing big data systems, cloud infrastructure security, and securing virtual environments. It takes approximately 17 hours, at 3 to 5 hours per week, is beginner-level, and has a good rating of 4.8.
Introduction to Cybersecurity: Great Learning
Offered by Great Learning Academy, the Introduction to Cybersecurity course introduces you to the world of cybersecurity. It guides you through different forms of cyberattacks, how to design a security system, and cryptography, and teaches basic concepts such as cyber threats, vulnerabilities, and attacks on systems, networks, and data.
The course covers the design of security systems, essential cybersecurity concepts, types of cryptography, attacks on cryptography, and several cybersecurity case studies. It is ideal for anyone interested in cybersecurity, and on successful completion you get a certificate you can share on your resume or social media. The course runs 2.5 hours across 19 sections.
The revolutionary advancements in artificial intelligence are transforming almost all spheres of modern human existence. Amongst the widespread prevalence of artificial intelligence, the life-changing applications in the healthcare sector are particularly remarkable. Most of the greatest healthcare innovations today have a significant component of AI. From early diagnosis of life-threatening diseases to performing complex surgeries, artificial intelligence is contributing profoundly to it all.
Although AI is by now a reasonably familiar technology in the healthcare sector of the western world, it is still gaining ground in India, and artificial intelligence startups focused on healthcare are doing a remarkable job of laying that groundwork. This article lists some of the pioneering AI healthcare startups in India that are transforming the face of medical care in the country.
1. HealthifyMe
HealthifyMe is a Bengaluru-based digital health and wellness company founded by Tushar Vashisht, Sachin Shenoy, and Mathew Cherian in 2012 with the aim of bringing digital healthcare to Indians. Its app uses a virtual assistant, 'Ria', billed as the world's first AI nutritionist, to answer users' queries around fitness, nutrition, and health in 10 different languages. The app also provides dietary recommendations and uses artificial intelligence to track calorie intake. Through a premium subscription, HealthifyMe offers professional nutrition and fitness advice to everyone from healthcare professionals to personal users, and the company claims access to one of the largest Indian food databases in the world.
2. PharmEasy
PharmEasy is one of the leading AI healthcare startups in India, offering an app that connects users with pharmacies. It was founded in 2015 by Dharmil Sheth, Mikhil Innani, and Dhaval Shah in Mumbai. The smartphone app connects users with pharmacies for seamless medicine deliveries and provides services such as teleconsultation and diagnostic test sample collection. PharmEasy uses machine learning and big data tools, including TensorFlow, Hadoop, Spark, and Hive, along with several data-crunching and analytics solutions. It has partnered with more than 80,000 pharmacies across 1,200+ cities in India.
3. Qure.ai
Qure.ai is a breakthrough artificial intelligence solution provider that is enhancing imaging accuracy and improving healthcare outcomes with the assistance of machine learning-supported tools. This Mumbai-based company is one of the leading AI healthcare startups in India that was founded in 2016. Qure.ai employs deep learning technology to provide automated interpretation of radiology exams like X-rays, CT scans, and ultrasounds, enabling faster diagnosis and treatment of malignant diseases. The tools can also outline and quantify specific information such as lung disease patterns. Qure.ai also offers digital pathology solutions that can distinguish malignant biopsies from benign ones and grade a variety of tumor types.
4. Axtria
Founded in 2010, Axtria is a Noida-based company that provides online sales and marketing management tools for the life sciences industry. It develops data analytics and software platforms that help clients manage customers, revenue, risk, and the supply chain. Axtria helps healthcare organisations and institutions with services such as planning, lead scoring, reporting, forecasting optimization, and visualization. Its risk solutions include portfolio risk management, regulatory compliance, operational risk management, and risk strategy development, and it also provides the healthcare sector with HR analytics, spend analytics, and supply chain analytics.
5. SigTuple
Another name among the leading AI healthcare startups in India is SigTuple, which has provided AI-based healthcare diagnostic solutions since 2015. SigTuple democratizes microscopy by automating it with advanced AI and robotics. Its machine learning and deep learning platforms, such as Manthan, can detect and predict the likelihood of a person having a particular disease from medical diagnostic images. SigTuple's peripheral blood smear analyzer, Shonit, provides image-processing-based solutions for differential blood counts, and screening for diseases like malaria and anemia on Shonit is under clinical trials.
6. Niramai
Bengaluru-based Niramai Health Analytix is one of the pioneering AI healthcare startups in India. Founded in 2016 by Geetha Manjunath and Nidhi Mathur, the company provides Thermalytix, a novel software-based medical device that applies AI to high-resolution thermal sensing for early-stage detection of breast cancer. The automated, low-cost, and portable device is making cancer screening possible in clinics across India; its core technology is based on machine learning algorithms. Niramai's solution is also used for regular preventive health checkups and large-scale screenings in rural and semi-urban areas.
7. Perfint Healthcare
Chennai-based AI healthcare startup Perfint Healthcare provides solutions for image-guided interventional procedures, laying special emphasis on oncology and pain care. Radiologists across the world are using Perfint’s robotic solutions for biopsy, drug delivery, ablation, drainage, fine needle aspiration, and varied pain care procedures for both cancerous and non-cancerous pain. The company’s products include Robio EX, Maxio, and Navios. Robio EX is a CT & PET-CT guided robotic positioning system, whereas Maxio provides intraoperative guidance and post-procedure verification support. Navios is a computer-based workflow assistance solution for CT-guided percutaneous ablation procedures.
8. HealthPlix
HealthPlix is another leading AI healthcare startup in India, focused on empowering healthcare professionals to make the right medical decisions using software specially customized for them. Through its assistive AI-powered EMR software, doctors can drive better patient health outcomes with clinical decision support at the point of care. HealthPlix also provides an electronic medical record solution for chronic care management, with features including e-prescription generation, lab management, and billing, plus dashboards offering AI and machine learning-based insights on marketing, clinical test hypotheses, finance, and treatment outcomes.
9. Dozee
Established in 2015, Dozee is a Bangalore-based AI healthcare startup that provides contactless health monitors that silently track heart rate, respiration, sleep patterns, stress levels, cardiac contractions, apnea, and more while the user sleeps. Its artificial intelligence algorithms enable early detection of health deterioration, and medical professionals can access the data remotely through Dozee's patient monitoring systems and app. Dozee's integration with pulse oximeters was used to provide continuous remote monitoring of Covid-19 patients, keeping doctors safe from exposure during the pandemic. Dozee has been endorsed by the Bill & Melinda Gates Foundation and the Indian government.
10. BlueSemi
Founded in 2017, the consumer health tech startup BlueSemi is one of the pioneering AI healthcare startups in India. The Hyderabad-based company provides IoT solutions for healthcare, including anti-counterfeiting and temperature measurement solutions, and offers a wireless temperature scanner for fever screening. Its energy harvesting device enables power independence for wearables and autonomous vehicles, and its other features include unique identification coding, data analytics, integration, customized access, and marketing capabilities. It has use cases in hospitals, medical institutions, wearables, retail, autonomous vehicles, smart homes, etc.
11. Tricog
Tricog is a Bangalore-based AI healthcare startup and one of the world's largest predictive healthcare analytics firms. It was founded in 2014 by interventional cardiologist Dr. Charit Bhograj to leverage deep learning-based medical and technology expertise and provide virtual cardiology services to distant clinics, and has since evolved into a technology-driven cardiac care company run by a highly experienced team. Tricog's flagship product, InstaECG, powers a network of 3,000 ECG diagnosis points at medical centers across the globe. Today, Tricog's AI draws on a data store covering 200+ cardiac conditions, which significantly improves the detection of rare cardiac disorders.
12. DocTalk
DocTalk is a Mumbai-based AI healthcare startup whose software application keeps track of medical reports and prescriptions and lets users easily share them with doctors. Founded in 2016 by Akshat Goenka and Vamsee Chamakura, the startup focuses on an AI-based virtual assistant to enhance India's healthcare industry. DocTalk also offers subscribers a platform where they can get answers by speaking with their physicians over the app, and users can save all their medical reports and photos to the cloud, making them accessible anywhere.
13. Lybrate
Lybrate is one of the pioneering AI healthcare startups in India, established in 2014 as the country's first online doctor consultation platform. Through its AI-based platform of the same name, people can connect with doctors and consult them online, and users can also book lab tests and appointments through the service. Lybrate additionally provides fitness, skincare, and hair care solutions, with doctors from a wide range of specialties across the country, as well as a health consultant bot on Facebook Messenger. The startup's goal is to make online consultation readily available to everyone.
14. HealthKart
Founded in 2011, HealthKart is one of the top AI healthcare startups in India, based in Gurgaon. Through its e-health store, it offers many health products and services to help customers reach their fitness goals, and it provides on-demand healthcare and fitness advice using conversational AI. It is among the largest healthcare e-stores in India, selling nutritional supplements, diabetes supplies, medical equipment, and infant care products, all of which can be purchased online and are home-delivered by HealthKart.
This leading healthcare platform was originally launched as a mobile application and a website; today it has grown into a marketplace offering many services, including browsing and buying prescription drugs online.
According to Livemint, India today has more than 692 million internet users, a figure expected to reach 900 million by 2025. Given that population and internet growth, ResearchandMarkets expects the Indian data center market to grow at a compound annual growth rate (CAGR) of 15.07% over 2022 to 2027.
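As a quick sanity check on these projections, the compound-growth formula can be used to back out the annual growth rate implied by the user figures above. Reading the 692 million figure as 2022 and the 900 million target as 2025 is an assumption; the sources do not state the exact window.

```python
# Implied compound annual growth rate (CAGR) of India's internet user base,
# from ~692 million (assumed 2022) to a projected 900 million (2025).
def cagr(start, end, years):
    """CAGR = (end / start) ** (1 / years) - 1."""
    return (end / start) ** (1 / years) - 1

rate = cagr(692, 900, 2025 - 2022)
print(f"Implied user-base CAGR: {rate:.1%}")  # roughly 9.2% per year
```

So the user base itself grows at roughly 9% a year under these figures, while the data center market is forecast to grow far faster (15.07% CAGR), reflecting rising data consumption per user.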
Many hyperscale data center operators announced data center projects in 2021, and that was only the beginning: this year many more companies are investing heavily in data centers to compete for the position of the best data center in India.
Note that this article does not rank the data centers. Here is a list of the top data centers in India in 2022.
NTT
NTT (Nippon Telegraph and Telephone) Global Data Centers and cloud infrastructure, also known as Netmagic, is a popular managed hosting and multi-cloud IT solutions provider. The company has 160 carrier-neutral, hyperscale, high-density data centers serving big firms across 20 countries, 12 of which are located in India. These 12 data centers are distributed across Mumbai, Bangalore, Chennai, and Delhi, making NTT one of the top data center operators in India in 2022. In India, NTT is headquartered in Mumbai with eight working data centers, six in Mumbai and two in Navi Mumbai; two more are in Bangalore and one each in Chennai and Delhi. The facilities currently carry a 150 MW (megawatt) IT load, with another 250 MW of IT load capacity under development. NTT Group operates the data centers through NTT Data, an information technology company headquartered in Tokyo, Japan. NTT helps clients accelerate growth and develop new business models, aiming to transform them through consulting, industry solutions, business process services, IT modernization, and managed services.
CtrlS
CtrlS is an Indian data center company founded in 2008 that provides services like data center colocation, DC build & consulting, internet bandwidth, managed services, cloud security services, and disaster recovery services. The company focuses on keeping clients' applications online and their data secure, and has reportedly recorded EBITDA (earnings before interest, taxes, depreciation, and amortization) growth almost double the industry standard. On this, Mr. Sridhar Pinnapureddy, founder and CEO of CtrlS, stated, 'Our growth is driven by sharp focus on product innovation, customer delight, process efficiencies and a very passionate CtrlS team. Apart from these, we are committed to expanding our global footprint and are making strategic investments into Asia Pacific, Middle East, and North American markets.' CtrlS is known as Asia Pacific's largest Tier-4 data center operator, with facilities in Hyderabad, Mumbai, Noida, and Bangalore. It is promoted by the 18-year-old Pioneer Group and has built out the largest available infrastructure in the data center world.
Web Werks
Web Werks is one of the top data center companies in India, offering a range of services that provide fast and secure access to data of any complexity. It has six data centers in India: three operating, one each in Mumbai, Pune, and Delhi-NCR, and three under development in Mumbai, Bangalore, and Hyderabad. Web Werks' facilities offer high-density, hyperscale, AI-powered infrastructure with best-in-class service, and the company aims to maintain and raise the standard of data center services with its new Indian facilities. Web Werks ensures clients achieve business excellence by addressing their data center needs and problems, believing that data centers hold the answer to the future of business. Its team consists of data management professionals who meet data needs with colocation that maximizes work efficiency at every turn.
Nxtra
Nxtra Data is one of the best data center operators in India in 2022 and a fully owned subsidiary of Bharti Airtel. Nxtra provides colocation, managed services, and cloud services. Headquartered in Gurugram, it has ten functional core data centers across seven locations in India: Noida, Manesar, Mumbai, Pune, Bhubaneswar, Bangalore, and Chennai. Having been part of both the data center and telecom industries for more than two decades, Nxtra runs a platform of hyper-connected core and edge data centers across 120+ locations in India alone, enabling businesses to accelerate their digital journeys and host applications closer to their customers. The company has begun investing in multiple large data center parks, and in 2020 Nxtra struck a deal with Carlyle Group, one of the world's largest investment firms, to sell up to 25% of its stake for $235 million.
Microsoft
Microsoft Corporation is a tech giant that produces computer software, consumer electronics, personal computers, and related services, and is a leading provider of cloud computing services, video games, computer & gaming hardware, and search and other online services. The American multinational is headquartered in Redmond, Washington, and operates offices in more than 60 countries. Microsoft is one of the most important IT companies contributing to software & services, and it is now building the largest data center in India. It has well-equipped data centers in three locations: Pune, Mumbai, and Chennai. At the beginning of 2022, Microsoft announced that it would set up its fourth and largest Indian data center in Hyderabad, Telangana, by the end of 2025. Microsoft is investing a total of Rs 15,000 crore in the new data center, which will employ 18,000 full-time employees. The Hyderabad facility will be Microsoft's largest data center after the one in Redmond.
Adani
Adani Enterprises is the flagship company of Adani Group, an Indian multinational conglomerate headquartered in Ahmedabad, Gujarat. Adani Group and EdgeConneX, one of the world’s largest private data center operators, formed a joint venture called AdaniConneX, which aims to empower digital India with 1 GW (gigawatt) of data center capacity in the next decade. Adani earned its place on the list of top data centers in India in 2022 through its innovative ideas and sheer determination. AdaniConneX focuses on accelerating clients’ digital transformation by being their data sanctuary, providing the levels of transparency, security, scale, and flexibility they need. Today, AdaniConneX has data centers in seven locations in India, including Chennai, Hyderabad, Mumbai, Noida, Pune, and Vizag.
Reliance Data Center
Reliance Data Center is one of the top data center companies in India, providing outsourced data center infrastructure for organizations with crucial IT operations. It is a division of Reliance Communications, the mobile network provider of Reliance Industries Limited. Currently, Reliance Data Center operates nine data centers located in Mumbai, Chennai, Hyderabad, and Bangalore. All of them are built on proven IT systems and facility platforms, backed by recognized expertise in the field. The company delivers security, reliability, scalability, networking, applications, and consulting services. Reliance Data Center not only provides a wide range of services but also maintains a high degree of network security and over 650,000 sq. ft. of hosting space. Fortune 1000 companies trust Reliance for its vast experience in the Indian data center business and its impeccable record of service delivery.
Sify Technologies
Sify Technologies Limited is an Indian information and communications technology (ICT) service and solution provider. The company delivers end-to-end ICT solutions, including telecommunications, data center, cloud & managed services, and transformation & application integration services. Today, Sify is one of the best data center operators in India. Sify has matured into India’s largest digital transformation company, assembling a converged ICT ecosystem consisting of the largest multiprotocol label switching (MPLS) network in India, top data centers, and cloud-managed services. Sify operates six data centers in India, located in Mumbai, Delhi, Bangalore, and Chennai, and connects over 45 data centers across the country. This connectivity helps businesses in different verticals leverage Sify’s data centers, networks, and security services. These facilities support deployments from a single cabinet to multi-MW scale in a carrier-neutral, multi-telecom ecosystem, offering shared & caged colocation, cloud solutions, remote infrastructure management, and migration services.
Equinix
Equinix is an American multinational specializing in data centers and internet connections. It is the largest global data center and colocation provider, operating a network of 220+ International Business Exchange (IBX) data centers in 63 metropolitan areas worldwide. Currently, Equinix has only two data centers in India, both located in Mumbai, but it makes the 2022 list because of its high level of security and operational reliability, backed by an award-winning portfolio and 20+ years of experience. The company has worked to interconnect industry-leading organizations across a digital-first environment. Recently, Equinix announced plans to build a third IBX data center in Mumbai with an initial investment of $86 million, including the acquisition of a parcel of land of nearly four acres. This new project will allow Equinix to grow its ecosystem on Platform Equinix across India, contributing to India’s growing digital economy.
Amazon Web Services
Amazon Web Services (AWS) is a subsidiary of Amazon that provides on-demand cloud computing platforms and APIs to individuals, companies, and governments. AWS continuously develops innovative data center designs and systems, implements controls, builds automated systems, and undergoes third-party audits to verify security and compliance. One reason many choose AWS is flexibility: users can opt for whichever development platforms or programming languages they prefer. Over the coming two years, AWS is set to open four smaller data centers in India, one each in Bangalore, Chennai, Delhi, and Kolkata. AWS plans to open 32 Local Zones in 26 countries over the next two years, making networking easier and more secure. The Local Zones will provide cloud services for use cases such as video streaming, gaming, and applications requiring real-time feedback. They extend AWS Regions by placing compute, storage, database, and other services near population and industry centers.
The hype around artificial intelligence has been skyrocketing day by day, given recent advances in the field. Artificial intelligence is, put simply, the simulation of human intelligence processes by machines, especially computer systems. But have you ever wondered who the father of artificial intelligence is?
After playing a significant role in laying the foundation for the creation of intelligent machines, John McCarthy, a renowned American computer scientist and inventor, was honored with the title Father of Artificial Intelligence. He coined the term Artificial Intelligence and is therefore credited as the founder of AI. This article sums up John McCarthy’s early life, his role in the history of artificial intelligence, and his achievements.
Early Life of John McCarthy
John McCarthy, a renowned computer scientist and a pioneer in artificial intelligence, was born in Boston on September 4, 1927. However, things were not always smooth for this genius. McCarthy was born to a family of European immigrants during the Great Depression. Considering the humble circumstances he came from, little pointed toward McCarthy becoming a successor to Alan Turing and the father of AI. The poor health of McCarthy’s younger brother led the family to settle in Los Angeles after moving around the country for a while in search of work. It was there that McCarthy, already outstanding in mathematics, came to know about the California Institute of Technology (Caltech). The future father of artificial intelligence taught himself college-level mathematics from used textbooks borrowed from Caltech students before entering college.
McCarthy worked as a carpenter and fisherman while studying. He was an inventor from a young age, devising a hydraulic orange-squeezer, among other things, to help his family. Despite health complications, he secured a place in the undergraduate mathematics program at Caltech. By the time he officially entered, he had already studied so much independently that his professors allowed him to skip the first two courses. He graduated in 1948 and obtained his doctorate in mathematics from Princeton in 1951. It was already clear that McCarthy was an extraordinary mind with exceptional learning capabilities.
Birth of Artificial Intelligence
Early in his career, John McCarthy attended a symposium on ‘Cerebral Mechanisms in Behaviour’, and it was there that the idea of creating machines that could think like humans took root in his mind. He believed it was possible to develop machines that embodied the problem-solving nature and abstract thinking of the human brain. In his formal proposal for the Dartmouth Conference, the first-ever artificial intelligence conference, held in 1956, McCarthy coined the term. His intention for the conference was to discuss whether a machine could be developed that could think abstractly, improve itself, and solve problems like a human. The father of AI claimed that every aspect of learning, or any other attribute of intelligence, could in principle be described so precisely that a machine could be made to simulate it.
The father of AI defined artificial intelligence as the science and engineering of making intelligent machines. At the conference, he established the objectives that he would pursue throughout his career. Ten computer scientists attended. The inaugural text for the conference was written with two prestigious scientists, Claude Shannon and Marvin Minsky, who soon moved on to focus on mathematical and computational theorizing. McCarthy, however, is recognized as the father of artificial intelligence for two reasons: pioneering AI as a new area of research, and continuing to drive its development for half a century.
Achievements of John McCarthy
As mentioned earlier, John McCarthy coined the term Artificial Intelligence, laying down the roadmap for it to become a field of study. He was joined by Marvin Minsky in 1959 at MIT, where McCarthy was a research fellow. Pursuing his quest to build machines that could imitate human logical thinking, McCarthy made his next breakthrough when he proposed the ‘Advice Taker’ in his 1958 paper ‘Programs with Common Sense’. The Advice Taker was a hypothetical computer program that would use logic to represent information in a computer. It was the first step toward using logical reasoning to advance artificial intelligence.
During the same time, the founder of artificial intelligence invented a new programming language called LISP, which is still used as a language in the field of artificial intelligence. John McCarthy gained much recognition between the 1950s and 1960s because of his stunning concept of time-sharing. Experts say that his concept of using one central computer to store everything and sharing across multiple systems was the foundation idea for the present-day cloud-based storage.
Three significant early time-sharing systems are credited to the father of artificial intelligence: the Compatible Time-Sharing System, the BBN Time-Sharing System, and the Dartmouth Time-Sharing System. Les Earnest, McCarthy’s colleague at the time, explained that the internet would have taken much longer to come into existence had it not been for McCarthy’s work on time-sharing systems.
Around 1959, McCarthy also invented the ‘garbage collection’ method for the programming language LISP. The method automatically reclaims memory occupied by objects that the program no longer uses, simplifying manual memory management in LISP. Later, John McCarthy became a distinguished member of the ACM Ad Hoc Committee on Languages at the International Federation for Information Processing (IFIP).
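The idea McCarthy pioneered, automatically reclaiming memory the program can no longer reach, survives in every modern managed runtime. A minimal sketch in Python (standing in for LISP here) uses a weak reference to observe an object being reclaimed once the last strong reference is dropped:

```python
import gc
import weakref


class Node:
    """A toy heap object; a weak reference lets us observe its lifetime."""
    pass


obj = Node()
probe = weakref.ref(obj)   # a weak reference does not keep obj alive

assert probe() is obj      # object is still reachable, so it survives

del obj                    # drop the only strong reference: obj is now garbage
gc.collect()               # ask the collector to reclaim unreachable objects

print(probe())             # → None: the collector has reclaimed the object
```

In CPython the object is actually freed as soon as its reference count hits zero; the explicit `gc.collect()` call mirrors the collector McCarthy described, which scans for and reclaims unreachable objects.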
Between 1978 and 1986, McCarthy developed the circumscription method of non-monotonic reasoning. As complicated as the term sounds, circumscription is a way of formalizing the common-sense assumption that things are as expected unless specified otherwise. In 1982, McCarthy floated a new idea called the Space Fountain: a tall tower extending into space, kept vertical by the outward force of a stream of pellets emitted from the earth, with a conveyor belt bringing the pellets back down.
Accolades
Throughout his journey, the significance of McCarthy’s contributions to artificial intelligence was evident. His pioneering work was recognized globally and earned him several accolades, notably:
Turing Award by Association for Computing Machinery (1971).
Kyoto Prize (1988).
USA National Medal of Science in Statistical, Computational Sciences, and Mathematics (1990).
Benjamin Franklin Medal in Cognitive Science and Computers by the Franklin Institute (2003).
John McCarthy was awarded the title of Stanford Engineering Hero (2012).
Last Days
Near the end of the research phase of his career, in 1978, McCarthy had to give up on his purist idea of artificial intelligence. He said that succeeding at artificial intelligence would take 5 Faradays, 2 Maxwells, 1.7 Einsteins, and the funding of 0.3 Manhattan Projects. During an interview, the founder of artificial intelligence said that if it takes 200 years to finally achieve artificial intelligence, and there is at last a textbook that explains how it is done, the most challenging part of that textbook to write will be the part that explains why people did not think of it 200 years earlier. The father of artificial intelligence died on October 24, 2011, at the age of 84. A professor and an inventor, John McCarthy defined and dominated the field of artificial intelligence for more than five decades.
Quotes from John McCarthy
“He who refuses to do arithmetic is doomed to talk nonsense.”
“All travel is, after all, a journey in time and mind. Physical landscapes are a mirror of, or perhaps a key into, our inner landscape.”
“Never abandon a theory that explains something until you have a theory that explains more.”
“With no more than six levels of misquotation, any statement can be made to say whatever you wish.”
Machine learning is a booming industry: reports by Fortune Business Insights show that the global machine learning market grew 36.1% in 2020 over 2019. This trend has created high demand for automated machine learning (AutoML) tools, which make ML algorithms and models easier to use. The goal of AutoML tools is to automate the regular, manual, and tedious workloads of ML implementations: they provide techniques that automatically find the best-performing ML model for a given dataset. Today, several AutoML packages and platforms are available for aspiring AI & ML developers and professionals.
AutoML tools enable users to train and test their models with minimal domain knowledge of either machine learning or their data.
This article provides a list of the best AutoML tools. Note that the tools are listed in no particular order.
Auto-Sklearn
Auto-Sklearn is an open-source automated machine learning tool built around scikit-learn. Scikit-learn (sklearn) is an open-source machine learning library built upon Scientific Python (SciPy), providing a range of supervised and unsupervised learning algorithms; extensions and modules of SciPy are conventionally named SciKits, hence the name scikit-learn. Auto-sklearn was first introduced in the 2015 paper ‘Efficient and Robust Automated Machine Learning’. Later, in 2020, the second version of Auto-Sklearn was released on GitHub and presented in the paper ‘Auto-Sklearn 2.0: The Next Generation’.
Auto-sklearn handles tasks such as feature selection, data preprocessing, hyperparameter optimization, model selection, and evaluation. The toolkit is a drop-in replacement for sklearn estimators and classifiers. It frames ML problems as CASH (Combined Algorithm Selection and Hyperparameter optimization) problems: it automatically searches for the right algorithm for a dataset and optimizes that algorithm’s hyperparameters. The automated process is based on Bayesian optimization with meta-learning, with the goal of finding the optimal model pipeline and building an ensemble from the individual pipelines. Auto-sklearn ships 15 classification algorithms and 14 feature preprocessors, and performs data scaling, categorical encoding, and missing-value handling.
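Auto-sklearn itself requires its own library (and Bayesian optimization), but the CASH idea it implements — searching jointly over candidate algorithms and their hyperparameters and keeping the best scorer — can be sketched with the standard library alone. The two candidate "algorithms" below are deliberately trivial stand-ins, not anything auto-sklearn actually uses:

```python
import random

# Toy 1-D dataset: the label is 1 exactly when x > 0.5.
random.seed(0)
data = [(x, int(x > 0.5)) for x in (random.random() for _ in range(200))]
train, holdout = data[:150], data[150:]


def threshold_model(cutoff):
    """'Algorithm A': predict 1 when x exceeds a cutoff hyperparameter."""
    return lambda x: int(x > cutoff)


def majority_model(_unused):
    """'Algorithm B': ignore x and always predict the majority training label."""
    majority = round(sum(y for _, y in train) / len(train))
    return lambda x: majority


def accuracy(model, rows):
    return sum(model(x) == y for x, y in rows) / len(rows)


# CASH-style joint search space: (algorithm, hyperparameter) candidates.
candidates = [(threshold_model, c / 10) for c in range(1, 10)]
candidates.append((majority_model, None))

# Select the combination that scores best on the training data.
best_algo, best_hp = max(candidates,
                         key=lambda c: accuracy(c[0](c[1]), train))
best_model = best_algo(best_hp)
print(best_hp, accuracy(best_model, holdout))
```

Auto-sklearn replaces this exhaustive loop with Bayesian optimization warm-started by meta-learning, and builds an ensemble from the pipelines it tried rather than keeping a single winner.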
Auto-PyTorch
Auto-PyTorch is one of the best AutoML tools, a machine learning automation toolkit built on top of PyTorch. PyTorch is an open-source ML framework based on the Torch library, with applications in computer vision and natural language processing (NLP). Auto-PyTorch extends the AutoML framework by jointly optimizing traditional ML pipelines and neural architectures. It has an API similar to Auto-sklearn’s and thus requires only a few inputs to fit a DL pipeline; it was designed to support tabular and time series data. The Auto-PyTorch workflow combines multi-fidelity optimization with portfolio construction for meta-learning and the ensembling of deep neural networks. Auto-PyTorch’s features are described in the papers ‘Auto-PyTorch Tabular: Multi-Fidelity MetaLearning for Efficient and Robust AutoDL’ and ‘Efficient Automated Deep Learning for Time Series Forecasting’.
Auto-Keras
Auto-Keras is an open-source software library that implements AutoML for deep learning models using the Keras API. Keras is a high-level API, written in Python, that runs on top of TensorFlow. Auto-Keras automatically searches for the architecture and hyperparameters of DL models, building Keras models via TensorFlow’s tf.keras API. This searching process is known as neural architecture search (NAS). DATA Lab developed Auto-Keras with the aim of making machine learning accessible to everyone.
Auto-Keras offers a simple and effective way to find top-performing models across various predictive modeling tasks, making it one of the best AutoML tools. It supports several tasks, including classification and regression on images, text, and structured data. The current release is a pre-release version, as the package is still evolving; tasks like time series forecasting, object detection, and image segmentation are under development and will arrive in future versions. Auto-Keras provides an easy-to-use interface: the user only specifies the location of the data and the number of models to try, and is returned the model that achieves the best results. Note that Auto-Keras is provided on an ‘as available’ basis without warranties, so any loss of data is at the user’s own risk.
Google Cloud AutoML
Google Cloud AutoML is automated machine learning software that lets you train custom ML models without coding. Google announced Cloud AutoML as a suite of machine learning products in 2018. It provides simple, secure, and flexible products with an easy graphical user interface (GUI). One of the latest products in this line is Vertex AI, which helps you build, deploy, and scale ML models faster, combining pretrained and custom tooling within a unified AI platform. The advantages of Google Cloud AutoML include building with groundbreaking ML tools powered by Google, deploying models faster with 80% fewer lines of code, and using MLOps tools for easy management of data and models.
DataRobot
DataRobot stands out among the best AutoML tools because it is an AutoML platform that manages and simplifies complex enterprise workflows. You can use the platform to make predictions, perform what-if analysis, and automate and optimize model creation. It also helps executives track value by giving real-time insight into how many models are running in production. DataRobot was founded by Jeremy Achin and Tom de Godoy in 2012 to automate the tasks needed to develop artificial intelligence and machine learning applications.
DataRobot’s features include data formatting, feature engineering, model selection, hyperparameter tuning, and monitoring. It also helps users understand which variables matter for prediction and evaluate the significance of different variables in determining the target. Deploying models in DataRobot is simple: once instructed, it exposes REST APIs, and its ML operations tooling lets you check a model’s accuracy and behavior against production data. DataRobot also offers pretrained models, a data catalog, and a user-friendly GUI to visualize the entire training and deployment process. DataRobot thus provides complete transparency into the workflow and monitors it from a single place.
BigML AutoML
BigML is an automated machine learning platform for building and sharing datasets and models. It can be considered software as a service (SaaS) that makes it affordable to build complex ML-based solutions, turning predictive patterns in data into usable, intelligent real-life applications. BigML’s offerings include private deployments and a rich toolkit that helps customers create, experiment with, automate, and manage machine learning workflows. BigML was founded in 2011, with headquarters in Corvallis, Oregon, and Valencia, Spain, to make machine learning easy and approachable for everyone. The company serves around 169,000 users across the world and promotes machine learning in academia through an education program spanning over 700 universities.
BigML is one of the best AutoML tools: a consumable, programmable, and scalable ML platform that automates classification, regression, time series forecasting, cluster analysis, anomaly detection, association discovery, and topic modeling tasks. BigML offers many ways to load raw data, including cloud storage systems, along with clustering algorithms, visualization, and flexible pricing. Its main service modes are a web interface, a command line interface, and an API.
H2O AutoML
H2O AutoML is the automated machine learning function of H2O, an open-source, distributed, in-memory machine learning platform with linear scalability that supports statistical and machine learning algorithms. It is designed to require minimal parameters, making automated model training and tuning more accessible: all you need to do in H2O is point to the dataset, identify the response column, and specify a time constraint. H2O.ai developed the H2O platform with cutting-edge, distributed implementations of many ML algorithms, supported from Java, Python, Spark, Scala, and R. It also has a web GUI that uses JSON, and trained models can be deployed on a Spark server, AWS, and elsewhere.
Among the best AutoML tools, H2O’s advantage is that it automates basic data preprocessing, model training and tuning, and the ensembling and stacking of models to deliver the best-performing model, freeing users to focus on other tasks like data collection, feature engineering, and deployment. H2O offers several impressive capabilities: necessary data processing, training a random grid of algorithms, tuning individual models with cross-validation, training two stacked ensembles, returning a sorted ‘Leaderboard’ of all models tried, and easy export of any model to production.
MLBox
MLBox is an automated machine learning Python library focusing on drift identification, entity embedding, and hyperparameter optimization. It is developed and used by an active community and is compatible with Linux, macOS, and Windows, and with 64-bit Python versions 3.5 to 3.7 only. MLBox’s main features are fast reading and distributed data processing, cleaning, and formatting; robust feature selection and leak detection; accurate hyperparameter optimization; and state-of-the-art predictive models.
A fully automated MLBox pipeline has three components: initialization, validation, and application. In initialization, raw data goes through preprocessing, cleaning, and encoding. In validation, MLBox performs feature engineering and selection and tunes the model. Lastly, in application, the whole pipeline is fitted to predict the target and support model interpretation.
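The three-stage flow above can be pictured as one pipeline. The following library-free Python sketch mirrors the initialization → validation → application order; the function names and the mean-predictor "model" are illustrative stand-ins, not MLBox's actual API:

```python
def initialize(raw_rows):
    """Initialization: read, clean, and encode raw CSV-like rows."""
    cleaned = [r.strip() for r in raw_rows if r.strip()]          # cleaning
    return [[float(v) for v in r.split(",")] for r in cleaned]    # encoding


def validate(rows):
    """Validation: 'tune' a model on the encoded rows.

    The last column is the target; a mean predictor stands in for
    MLBox's feature selection and hyperparameter optimization.
    """
    target = [r[-1] for r in rows]
    mean = sum(target) / len(target)
    return lambda _features: mean


def apply_pipeline(model, new_rows):
    """Application: the fitted pipeline predicts the target for new data."""
    return [model(r) for r in new_rows]


raw = ["1,2,10", "3,4,20", "  ", "5,6,30"]   # note the blank row to be cleaned
model = validate(initialize(raw))
print(apply_pipeline(model, [[7, 8]]))        # → [20.0]
```

Each stage consumes the previous stage's output, which is exactly why MLBox can run the whole chain unattended once it is pointed at the raw data.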
In a research study, Citigroup noted that the Ethereum blockchain’s planned Merge, an update converting it from a proof-of-work (PoW) system to a more environmentally friendly proof-of-stake (PoS) mechanism, will have a variety of effects. The Merge is scheduled to usher Ethereum, the second-largest cryptocurrency, into its next phase (previously known as Ethereum 2.0), migrating the network’s whole blockchain to a new structure that supporters claim will make it more efficient, sustainable, and scalable.
To verify transactions and protect the network, proof of work relies on participants known as miners, who use powerful computers to solve challenging mathematical puzzles. In exchange for their work, miners are paid in the network’s native cryptocurrency, such as BTC or ETH. Proof-of-stake networks replace miners with validators, who confirm transactions and maintain the network by staking, or locking up, a certain amount of the network’s native cryptocurrency. Instead of expending electricity to validate blocks, this approach requires users to put up capital. In contrast to a proof-of-work system, where an attacker needs 51% of the network’s computing power, attacking a proof-of-stake system requires controlling 51% of the total staked ether. The network grows safer as more total ether is staked, since acquiring 51% of that capital costs more.
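That last point, attack cost scaling linearly with the total stake, is simple arithmetic. A quick sketch (the ETH price here is an illustrative assumption, not a market quote):

```python
def attack_cost_usd(total_staked_eth, eth_price_usd):
    """Dollar cost of acquiring 51% of the staked ether (integer math)."""
    return total_staked_eth * eth_price_usd * 51 // 100


PRICE = 1_600  # assumed USD per ETH, for illustration only

# More total stake -> strictly more expensive 51% attack.
for staked in (10_000_000, 13_280_000, 20_000_000):
    print(f"{staked:,} ETH staked -> ${attack_cost_usd(staked, PRICE):,}")
```

Doubling the total stake doubles the attacker's bill, which is why the text says the network grows safer as more ether is staked.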
Validators will also ensure network security by locking up the network’s native crypto, ETH. To become a full validator on Ethereum 2.0, a prospective block verifier must stake 32 ETH by depositing it into the official deposit contract created by the Ethereum Foundation.
It is important to note that owners of ETH need to take no special action to “transfer” ETH from the 1.0 chain to the 2.0 chain. ETH holders can continue using (or hodling) their ETH just as before, but on a more secure and scalable chain; Ethereum 1.0 simply becomes a part of the 2.0 chain.
Staking will probably slow the rate at which ETH’s supply grows, which may be advantageous for Ethereum investors if supply growth slows as demand picks up. It should therefore be no surprise that the price of ETH has risen as the upgrade draws closer.
Over the past several days, the amount of Ethereum staked on the Beacon Chain, the PoS chain of Ethereum that debuted in December 2020, has been slowly rising. On December 1, 2020, 16,000 validators had deposited the required 32 ETH each to stake. Since then, Ethereum’s proof-of-work consensus layer has coexisted with the proof-of-stake Beacon Chain.
The current mainnet and the PoS Beacon Chain are set to combine around September 19. Currently, more than 410,000 distinct validators have contributed to a total staked value of 13.28 million ETH. The Merge has also been widely viewed as a positive trigger for the price of ETH. With the migration to proof of stake, the network will stop paying miners and reward only validators.
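The two staking figures quoted above are consistent with each other: at the 32 ETH minimum per validator, 13.28 million staked ETH implies roughly 415,000 validators, in line with the "more than 410,000" reported:

```python
STAKE_PER_VALIDATOR_ETH = 32      # minimum deposit per full validator
total_staked_eth = 13_280_000     # total staked value quoted above

# Upper bound on full validators this stake can fund.
implied_validators = total_staked_eth // STAKE_PER_VALIDATOR_ETH
print(implied_validators)         # → 415000
```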
According to the bank’s research, the update might reduce energy consumption and pave the way for a more scalable future through sharding. Sharding is the process of dividing a large database into pieces (shards) that together make up the complete dataset. Under the roadmap following the Merge, the current Ethereum chain is planned to split into 64 parallel shard chains.
Since each shard chain will carry a portion of the existing Ethereum chain’s data storage, distributing the data processing load among the nodes, sharding will significantly increase the network’s scalability while relieving pressure on the core network. The multi-phased sharding rollout is expected in 2023–2024, although it was initially scheduled for 2022. Once available, sharding will also support Layer 2 rollups like zkSync and Immutable X, which handle large volumes of Ethereum transactions off-chain more quickly and cheaply.
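The split described above is easy to picture: each shard stores and processes only its own slice of the data, and every item lands on exactly one shard. A toy sketch (the hash function and account format are illustrative, not Ethereum's actual shard assignment):

```python
NUM_SHARDS = 64


def shard_of(account_id: str) -> int:
    """Map an account to one of 64 shards with a stable, toy hash."""
    h = 0
    for ch in account_id:
        h = (h * 31 + ord(ch)) % 2**32
    return h % NUM_SHARDS


accounts = [f"0xabc{i:04d}" for i in range(1000)]

shards: dict[int, list[str]] = {}
for acct in accounts:
    shards.setdefault(shard_of(acct), []).append(acct)

# Every account lands on exactly one shard; together the shards
# hold the complete dataset.
print(sum(len(v) for v in shards.values()))  # → 1000
```

Because the mapping is deterministic, any node can compute which shard is responsible for a given account without consulting the others, which is what lets the processing load spread across the network.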
Thanks to sharding and proof of stake, Ethereum will be able to execute somewhere between 20,000 and 100,000 transactions per second. Against the current rate of 10–20 transactions per second, that is a speed improvement of up to 999,900%, even though it might take a few years to reach maximum capacity. These exponential speed increases will help reduce network congestion and gas prices. The so-called “triple halvening” following the Merge will see fresh block rewards decline from 12,000 ETH per day to 1,280 ETH, increasing scarcity, curbing issuance inflation of the token, and likely supporting the value of ETH.
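Both percentages quoted above check out with a few lines of arithmetic:

```python
# Throughput: from ~10 TPS today to ~100,000 TPS with sharding + PoS.
old_tps, new_tps = 10, 100_000
speedup_pct = (new_tps - old_tps) / old_tps * 100
print(speedup_pct)               # → 999900.0

# "Triple halvening": daily issuance drops from 12,000 ETH to 1,280 ETH,
# an almost tenfold cut in new supply.
old_issue, new_issue = 12_000, 1_280
reduction_pct = (old_issue - new_issue) / old_issue * 100
print(round(reduction_pct, 1))   # → 89.3
```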
Citi estimates that the switch to proof of stake could also reduce the annual issuance of Ethereum by 4.2%. Following the upgrade, ETH may even turn into a deflationary asset, because Ethereum also burns a part of its supply through gas fees via EIP-1559. According to the study, Ethereum’s importance as a store of value will increase as it becomes deflationary.
The bank predicts that, as a deflationary asset, Ethereum is less likely to be the blockchain with the maximum throughput. The report also stated that Ethereum’s block time will decrease from 13 to 12 seconds as a result of the Merge, which would mean lower fees and faster transactions. Block time is the time needed to authenticate or validate a block; for Ethereum, it typically takes between 10 and 20 seconds.
Furthermore, Citi stated that as a PoS network, Ethereum would draw investment opportunities, enabling new valuation methodologies currently unavailable for the blockchain. Citi also added that, with energy usage falling by 99.95% after the Merge, Ethereum may be deemed an energy-efficient and environmentally friendly cryptocurrency.
According to the report, the Merge, the first of five planned network enhancements, may boost transaction speeds by only about 10%, by cutting down on block times. The Surge (sharding), the network’s next scheduled update, is expected to enable 100,000 transactions per second (TPS) on the blockchain, according to the study. The Surge will be followed by The Verge (introduction of Verkle trees), The Purge (reducing the hard drive space needed by validators), and The Splurge (miscellaneous minor upgrades).
In the opinion of its proponents, the sooner the Merge takes place, the faster it can address two main Web 3.0 growth constraints: the need to accommodate growing demand for distributed/decentralized processing capacity in an environmentally friendly manner, and a shortage of raw native blockchain development talent.
In an effort to move closer to the Merge, Ethereum developers tested a “shadow fork” on April 11. A shadow fork replicates data from the Ethereum blockchain to a testnet — a test environment network, that is, a separate blockchain used for testing and evaluation. Engineers test newly added features on the shadow fork before transferring their findings to the main network.
Developers reported that the shadow fork test, conducted against both current testnets and a portion of the Ethereum blockchain, uncovered client implementation problems. Once clients function normally during shadow forks, the existing Ethereum testnets go through the Merge themselves; a date for the mainnet upgrade is scheduled only after the testnets have successfully upgraded and stabilized.
Before the Ethereum blockchain transitioned from proof of work to proof of stake, it underwent its third and final test-network merge on August 10 under the moniker Goerli (after a Berlin railway station). In this instance, the Prater beacon chain and the Goerli testnet were combined. The experiment demonstrated the effectiveness of the merge process and showed how proof-of-stake validation significantly decreases the energy required to validate a block of transactions. The Sepolia testnet merge went well the previous month, while the Ropsten testnet successfully switched to proof of stake in June.
Several metrics indicate whether a test was effective. The simplest to monitor is the participation rate: how many validators are online and doing their jobs. If that number drops, developers must determine why.
Another metric is transaction processing: in Ethereum, transactions are bundled into blocks, so blocks containing genuine transactions rather than sitting empty are a sure sign the test succeeded. The final significant check is whether the network is finalizing, meaning more than two-thirds of validators are online and agree on the chain’s history; under typical network conditions this takes about 15 minutes.
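The two health checks above can be sketched in a few lines. This is a simplified illustration of the arithmetic only; the function names and the validator counts are assumptions, not Ethereum client code.

```python
# Hypothetical sketch of the test-health metrics described above.
# Numbers and names are illustrative assumptions, not consensus-client code.

def participation_rate(online_validators: int, total_validators: int) -> float:
    """Share of validators online and attesting -- the simplest metric to watch."""
    return online_validators / total_validators

def is_finalizing(online_validators: int, total_validators: int) -> bool:
    """Finality requires strictly more than two-thirds of validators to agree."""
    return 3 * online_validators > 2 * total_validators

# Example: 420,000 of 600,000 validators online gives 70% participation,
# which clears the two-thirds threshold needed for the chain to finalize.
print(participation_rate(420_000, 600_000))  # 0.7
print(is_finalizing(420_000, 600_000))       # True
```

Note that exactly two-thirds is not enough: the condition is strict, so 400,000 of 600,000 would not finalize under this sketch.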
However, some people pointed out that a few small bugs persisted from the first two testnet mergers. Despite some “confusion on the network due to two different terminal blocks and plenty of non-updated nodes,” according to Ethereum developer Marius van der Wijden, things were still moving forward pretty well.
“Bellatrix” and “Paris” are the final two stages of Ethereum’s upgrade. Developers said during the conference call that “Bellatrix” will occur on September 6. The last phase, dubbed “Paris,” will take place when the chain’s total difficulty (a cumulative measure of all the mining work done on the network) exceeds a preset threshold known as the Terminal Total Difficulty. This is presently expected to occur around September 15.
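The “Paris” trigger is just a threshold comparison on cumulative difficulty. The sketch below is a simplified illustration, assuming the Terminal Total Difficulty value announced for mainnet (58750000000000000000000); the function itself is not actual client code.

```python
# Hedged sketch of the "Paris" trigger: the Merge activates on the first
# block whose cumulative (total) difficulty reaches a preset threshold.
# The constant is the announced mainnet Terminal Total Difficulty; the
# function is an illustrative assumption, not Ethereum client code.

TERMINAL_TOTAL_DIFFICULTY = 58_750_000_000_000_000_000_000

def merge_triggered(total_difficulty: int) -> bool:
    """True once cumulative mining work meets or exceeds the TTD."""
    return total_difficulty >= TERMINAL_TOTAL_DIFFICULTY
```

Because the threshold is defined on cumulative work rather than a calendar date, the September 15 target is an estimate that shifts with the network’s hash rate.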
Ethereum detractors have compared the transition to replacing an airplane’s engine in the middle of a passenger flight. At stake is not just the aircraft but the $183 billion worth of ether in circulation.
On a technical level, the new blockchain may harbor unanticipated flaws; critics point to Solana, another proof-of-stake platform, which has suffered several total outages this year. At the same time, experts believe that the Merge, together with layer-2 networks built on top of Ethereum to improve layer-1 scalability, will strengthen Ethereum against other proof-of-stake competitors.
Critics also question the security of proof of stake. However, a mechanism called slashing punishes validators that act maliciously by destroying part of their staked ether and terminating their network access.
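The slashing idea can be illustrated with a minimal sketch. The penalty fraction and data layout below are hypothetical assumptions chosen for clarity, not the actual values from the Ethereum consensus specification.

```python
# Minimal sketch of slashing: a validator caught misbehaving loses part of
# its staked ether and is ejected from the network. The 1/32 penalty and
# the data layout are illustrative assumptions, not consensus-spec values.

from dataclasses import dataclass

@dataclass
class Validator:
    staked_ether: float
    active: bool = True

def slash(v: Validator, penalty_fraction: float = 1 / 32) -> Validator:
    """Destroy a share of the stake and terminate network access."""
    v.staked_ether -= v.staked_ether * penalty_fraction
    v.active = False
    return v

# A validator staking 32 ETH loses 1 ETH and is deactivated.
v = slash(Validator(staked_ether=32.0))
print(v.staked_ether, v.active)  # 31.0 False
```

The economic argument is that misbehavior becomes directly costly: an attacker must put capital at risk that the protocol can destroy.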
In conclusion, despite delays, and with a competitive lead already established, careful and deliberate steps toward the Merge should keep Ethereum competitive and energy-efficient for the foreseeable future.