
Pakistani Students Develop Revolutionary ‘Smart Mirror’ for Selfies, Music and Weather Updates

A group of talented Pakistani students has developed an innovative product that has caught the attention of tech enthusiasts worldwide. The “Smart Mirror” is equipped with touch technology, making it user-friendly and accessible. The mirror offers an array of features that cater to modern consumers’ needs, including the ability to take pictures and print them, play music, and provide weather updates.

One of the most exciting features of the Smart Mirror is its picture-taking capability. Users can easily snap selfies, edit them with filters, and even print them out for instant keepsakes. Additionally, they can access their photos by scanning the mirror’s QR code with their phone, making sharing memories with friends and family an absolute breeze.

The Smart Mirror is a testament to Pakistan’s growing technological capabilities and the potential of its young, innovative minds to make a significant impact on the global tech scene. With their cutting-edge technology and impressive skill set, Pakistani students have created a product that is not only functional but also incredibly fun to use. This impressive feat serves as a reminder of the power of human creativity and innovation, and we can’t wait to see what they’ll come up with next!

Chinese Experts Make Major Discovery in 6G Communication

6G, short for the sixth-generation cellular network, is the next frontier of telecommunications, promising faster and more reliable communication than any existing technology. 5G networks, which are being rolled out in different parts of the globe, already offer low transmission latency. Experts predict that 6G networks will lower latency further and enable more efficient use of the electromagnetic spectrum.

Researchers at the China Aerospace Science and Industry Corporation Second Institute have achieved a breakthrough in next-generation 6G communication by conducting the first real-time wireless transmission, the South China Morning Post reported.

What makes China’s achievement special?

Experts expect 6G cellular networks to enable high-definition virtual reality (VR), holographic communication, and other data-intensive applications. The researchers used a special antenna to generate four different beam patterns at a frequency of 110 GHz, which enabled them to transmit data at 100 gigabits per second over a 10 GHz bandwidth, a significant upgrade from current levels.
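For a rough sense of what those figures mean, dividing the reported data rate by the reported bandwidth gives the spectral efficiency of the link. The short Python sketch below uses only the numbers quoted above; the arithmetic is illustrative rather than anything taken from the researchers’ own analysis.

```python
# Back-of-envelope spectral efficiency from the reported figures.
data_rate_bps = 100e9   # 100 gigabits per second, as reported
bandwidth_hz = 10e9     # 10 GHz of bandwidth at the 110 GHz carrier

spectral_efficiency = data_rate_bps / bandwidth_hz
print(f"spectral efficiency: {spectral_efficiency:.0f} bit/s/Hz")
```

That works out to roughly 10 bits per second for every hertz of spectrum, which is one way to quantify the “efficient use of the electromagnetic spectrum” that 6G is expected to deliver.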

The technology used for this real-time data transmission has been dubbed terahertz orbital angular momentum communication, the SCMP said in its report.

Image: 6G will be a crucial tool of communication in the future. (Tony Studio/iStock)

Terahertz refers to communication in the frequency range between 100 GHz and 10 THz of the electromagnetic spectrum. The higher frequencies enable faster data transfer rates and allow more information to be transmitted. Terahertz communication has also attracted interest for military use, since it offers high-speed and secure communication.

The other significant part of the achievement is the orbital angular momentum (OAM) encoding used in the transmission. This technique allows more information to be transmitted at once. The researchers used OAM to transmit multiple signals on the same frequency, demonstrating a more efficient use of the spectrum.
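To illustrate the principle rather than the team’s actual hardware, the toy Python sketch below multiplexes two symbols onto OAM modes with different integer “twist” and recovers them by projection. The orthogonality of the exp(i*l*phi) phase profiles is what lets distinct signals share the same frequency; the mode numbers and symbol values are arbitrary choices for the demo.

```python
import numpy as np

# Sample the azimuthal angle around the beam axis
phi = np.linspace(0, 2 * np.pi, 1024, endpoint=False)

def oam_mode(l):
    """Normalized azimuthal phase profile of an OAM beam with topological charge l."""
    return np.exp(1j * l * phi) / np.sqrt(len(phi))

# Two data symbols carried on modes l = 1 and l = 2 at the same frequency
symbols = {1: 0.7 + 0.3j, 2: -0.5 + 0.9j}
field = sum(s * oam_mode(l) for l, s in symbols.items())

# Demultiplex by projecting the combined field onto each mode
for l in (1, 2):
    recovered = np.vdot(oam_mode(l), field)
    print(f"mode l={l}: sent {symbols[l]}, recovered {np.round(recovered, 3)}")
```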

While it may take a few years for these technologies to become commonplace, the researchers also demonstrated advancements in wireless backhaul technology that can be deployed soon.

Conventional cellular networks transmit data from devices to base stations and then to core networks through fiber optic cables. However, with an expected increase in base stations, fiber-based transmission is anticipated to become more expensive and time-consuming. The researchers aim to provide flexibility at lower costs by using wireless technology for backhaul, which can also be used for existing 5G communication.

In the future, 6G communication technology will also be critical for short-range broadband transmissions, such as those between lunar and Mars landers and their spacecraft. The U.S. government has taken note of the advances made by the Chinese communication industry and is looking for ways to advance the technology at home and reassert U.S. dominance in the area, the Wall Street Journal reported.

The First Transformable Nano-Scale Electronic Devices Are Finally Here

What if the nano-scale electronic parts in devices like smartphones could transform into other objects? University of California, Irvine physicists have now engineered versions of these types of devices that can do just that. They can be altered into many different shapes and sizes.

This is according to a press release by the institution published on Monday.

“What we discovered is that for a particular set of materials, you can make nano-scale electronic devices that aren’t stuck together,” said Javier Sanchez-Yamagishi, an assistant professor of physics and astronomy whose lab performed the new research. 

“The parts can move, and so that allows us to modify the size and shape of a device after it’s been made.”

The electronic components remain attached to the device but can be reconfigured into any pattern a researcher can imagine.

“The significance of this research is that it demonstrates a new property that can be utilized in these materials that allows for fundamentally different types of device architectures to be realized, including mechanically reconfigurable parts of a circuit,” said Ian Sequeira, a Ph.D. student in Sanchez-Yamagishi’s lab.

Up until now, scientists did not think such configurations were possible.

In fact, Sanchez-Yamagishi and his team weren’t even looking for what they ultimately discovered.

“It was definitely not what we were initially setting out to do,” said Sanchez-Yamagishi. “We expected everything to be static, but what happened was we were in the middle of trying to measure it, and we accidentally bumped into the device, and we saw that it moved.”

This is because tiny nano-scale gold wires can slide with very low friction on top of special crystals called “van der Waals materials.”

Slippery interfaces

Making the most of these slippery interfaces, they developed electronic devices made of single-atom-thick sheets of graphene attached to gold wires, which can easily be transformed into a variety of configurations.

However, the impact of the new devices still remains unclear.

“The initial story is more about the basic science of it, although it is an idea which could one day have an effect on industry,” said Sanchez-Yamagishi. “This germinates the idea of it.”

One area where it is certain to have an impact is quantum science research.

“It could fundamentally change how people do research in this field,” Sanchez-Yamagishi said.

“Researchers dream of having flexibility and control in their experiments, but there are a lot of restrictions when dealing with nanoscale materials,” he added in the press statement. “Our results show that what was once thought to be fixed and static can be made flexible and dynamic.”

The study is published in Science Advances.

A Computer Breakthrough Helps Solve a Complex Math Problem 1 Million Times Faster

Reservoir computing, a machine learning algorithm that mimics the workings of the human brain, is revolutionizing how scientists tackle the most complex data processing challenges. Now, researchers have discovered a new technique that can make it up to a million times faster on certain tasks while using far fewer computing resources and less input data.

With the next-generation technique, the researchers were able to solve a complex computing problem in less than a second on a desktop computer — and these highly complex problems, such as forecasting the evolution of dynamic systems like the weather that change over time, are exactly why reservoir computing was developed in the early 2000s.

These systems can be extremely difficult to predict, with the “butterfly effect” being a well-known example. The concept, closely associated with the work of mathematician and meteorologist Edward Lorenz, describes how a butterfly fluttering its wings can influence the weather weeks later. Reservoir computing is well-suited to learning such dynamic systems and can provide accurate projections of how they will behave in the future; however, the larger and more complex the system, the more computing resources, the larger the network of artificial neurons, and the more time are required to obtain accurate forecasts.

Until now, however, researchers have known only that reservoir computing works, not what goes on inside it. The artificial neural networks at its core are built on mathematics, and it turns out that all the system needed to run more efficiently was a simplification. A team of researchers led by Daniel Gauthier, lead author of the study and professor of physics at The Ohio State University, did just that, dramatically reducing the need for computing resources and saving significant time.
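To make the idea concrete, here is a minimal, illustrative sketch of the “next-generation” approach in plain Python (often described as nonlinear vector autoregression): instead of running a large recurrent network, it builds a short feature vector from a few time-delayed states plus their quadratic products and fits a single ridge regression to forecast the Lorenz system. The delay depth, ridge penalty, and integration settings here are assumptions chosen for readability, not the configuration used in the paper.

```python
import numpy as np

def lorenz_trajectory(n_steps, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with a simple Euler scheme."""
    x = np.array([1.0, 1.0, 1.0])
    traj = np.empty((n_steps, 3))
    for i in range(n_steps):
        dx = np.array([
            sigma * (x[1] - x[0]),
            x[0] * (rho - x[2]) - x[1],
            x[0] * x[1] - beta * x[2],
        ])
        x = x + dt * dx
        traj[i] = x
    return traj

data = lorenz_trajectory(6000)
k = 2  # number of time-delay taps (illustrative choice)

def features(traj, t):
    """Constant term + delayed states + their unique quadratic products."""
    lin = np.concatenate([traj[t - i] for i in range(k)])
    quad = np.outer(lin, lin)[np.triu_indices(len(lin))]
    return np.concatenate(([1.0], lin, quad))

train_idx = range(k, 5000)
X = np.array([features(data, t) for t in train_idx])
Y = np.array([data[t + 1] - data[t] for t in train_idx])  # learn the one-step change

ridge = 1e-6  # regularization strength (illustrative choice)
W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)

t = 5500  # held-out point
prediction = data[t] + features(data, t) @ W
print("predicted:", np.round(prediction, 3))
print("actual:   ", np.round(data[t + 1], 3))
```

The feature vector here has just 28 entries (a constant, six delayed state variables, and their 21 unique products), and the only training step is a small linear solve; that is the sense in which the new approach trades a big, randomly connected reservoir for a handful of well-chosen features.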

When the concept was put to the test on a forecasting task, it was discovered that the next-generation reservoir computing technique was clearly superior to others, according to the study published in the journal Nature Communications.

Depending on the data, the new approach proved to be 33 to 163 times faster. However, when the work objective was changed to favor accuracy, the new model was 1 million times faster. This increase in speed was made possible by the fact that next-generation reservoir computing requires less warmup and training than previous generations.

“For our next-generation reservoir computing, there is almost no warming time needed,” explained Gauthier, in a press release. “Currently, scientists have to put in 1,000 or 10,000 data points or more to warm it up. And that’s all data that is lost, that is not needed for the actual work. We only have to put in one or two or three data points.”

Furthermore, the new technique was able to attain the same accuracy with only 28 neurons, as opposed to the 4,000 required by the current-generation model.

“What’s exciting is that this next generation of reservoir computing takes what was already very good and makes it significantly more efficient,” Gauthier stated. And it looks like this is only the beginning. The researchers plan to test the super-efficient neural network against more difficult tasks in the future, expanding the work to even more complex computer issues, such as fluid dynamics forecasting.

“That’s an incredibly challenging problem to solve,” Gauthier said. “We want to see if we can speed up the process of solving that problem using our simplified model of reservoir computing.”

Scientists Discover New Circuit Element Called ‘meminductor’

A group of scientists has announced the discovery of a brand new circuit element known as the meminductor. 

Before we get into the new research, led by Texas A&M University, a little background on circuits is in order. 

Electrical circuits are ubiquitous in our daily lives yet complicated to comprehend. Take a look around you and you can easily find examples: at the most basic level, switchboards help us turn on lights, while more complex circuits are found in our cars and computers. 

However, the electrical circuit was invented about 200 years ago (around 1800), and its fundamentals remained largely unchanged for most of that time. That began to change in 2008.

The past developments

A circuit consists of three major elements that direct and control the flow of electricity through an electrical circuit. These elements are resistors, capacitors, and inductors. Each of these serves a different purpose, such as storing energy or restricting the flow of electricity. 

Scientists suspected that there was more to the world of circuitry. This curiosity led to the discovery of two new circuit elements, in 2008 and 2019: the memristor and, later, the memcapacitor. The discoveries had a significant impact on the field. 

“Those two discoveries set the world a little bit on its head as far as electrical engineering,” said H. Rusty Harris, one of the researchers of this new study, in a press release. 

These terms blend the word “memory” with resistor and capacitor. This is because their “current and voltage properties are dependent on previous values of current or voltage in time, like a memory.”

The discovery of the new circuit element 

The previous two discoveries set researchers, including Harris, thinking. 

“All of the sudden, we thought we had three, but now we found these two others. And so that led us to think, ‘OK, there’s got to be more then, but how do we understand what they are? How do we map all of these things relative to each other?’ And it turns out, there is a relationship between each of the resistors and its family and each of the capacitors and its family,” added Harris.

The team experimented using a tool called a two-terminal passive system. This aided them in proving the presence of a new element called a meminductor. 

According to the statement, this circuit system primarily comprised an electromagnet and two permanent magnets. They investigated the density and strength of a magnetic field flux traveling through an inductor. 

The team confirmed the existence of a “mem” state (memory-like nature) within the inductor through this experimentation. The authors highlight that this development is “by the same definition that the memristor and memcapacitor were realized.”
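As a purely illustrative toy model, and not a description of the electromagnet-and-magnet apparatus used in the study, the Python sketch below simulates an ideal meminductor whose inductance depends on the time-integral of the current flowing through it. Plotting flux against current traces the pinched hysteresis loop that serves as the characteristic fingerprint of a “mem” element; the base inductance and modulation constant are arbitrary values chosen to make the loop visible.

```python
import numpy as np
import matplotlib.pyplot as plt

# Sinusoidal drive current
dt = 1e-5
t = np.arange(0.0, 0.04, dt)
i = 0.5 * np.sin(2 * np.pi * 50 * t)        # amps

# Ideal meminductor: inductance depends on the charge (time-integral of current)
L0 = 1e-3                                   # base inductance in henries (assumed)
k = 0.5                                     # state-dependent modulation (assumed)
q = np.cumsum(i) * dt                       # accumulated charge, the "memory" state
L_mem = L0 + k * q                          # meminductance
phi = L_mem * i                             # flux linkage: phi = L(q) * i

# The flux-current curve passes through the origin every half cycle but
# otherwise splits into two lobes: a pinched hysteresis loop.
plt.plot(i, phi)
plt.xlabel("current (A)")
plt.ylabel("flux linkage (Wb)")
plt.title("Pinched hysteresis loop of a toy meminductor")
plt.show()
```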

The discovery of this new element could pave the way for developing the next generation of electronics.

The findings have been published in the journal Scientific Reports.

Study abstract:

The first intentional memristor was physically realized in 2008 and the memcapacitor in 2019, but the realization of a meminductor has not yet been conclusively reported. In this paper, the first physical evidence of meminductance is shown in a two-terminal passive system comprised primarily of an electromagnet interacting with a pair of permanent magnets. The role of series resistance as a parasitic component which obscures the identification of potential meminductive behavior in physical systems is discussed in detail. Understanding and removing parasitic resistance as a “resistive flux” is explored thoroughly, providing a methodology for extracting meminductance from such a system. The rationale behind the origin of meminductance is explained from a generalized perspective, providing the groundwork that indicates this particular element is a realization of a fundamental circuit element. The element realized herein is shown to bear the three required and necessary fingerprints of a meminductor, and its place on the periodic table of circuit elements is discussed by extending the genealogy of memristors to meminductors.

Waiting For Quantum Computers To Arrive, Software Engineers Get Creative

Quantum computers promise to be millions of times faster than today’s fastest supercomputers, potentially revolutionizing everything from medical research to the way people solve problems of climate change. The wait for these machines, though, has been long, despite the billions poured into them.

But the uncertainty and the dismal stock performance of publicly listed quantum computer companies, including Rigetti Computing Inc (RGTI.O), have not scared investors away. Some are turning to startups that are biding their time by pivoting to powerful conventional chips that run quantum-inspired software on regular computers.

Lacking quantum computers that customers can use today to get an advantage over classical computers, these startups are developing a new breed of software inspired by algorithms used in quantum physics, a branch of science that studies the fundamental building blocks of nature.

Once too big for conventional computers, these algorithms are finally being put to work thanks to today’s powerful artificial intelligence chips, industry executives told Reuters.
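As a generic illustration of why this kind of workload suits AI accelerators, and not a description of any particular company’s product, the Python sketch below simulates a small quantum register classically using dense linear algebra. The whole computation reduces to tensor contractions, exactly the operations GPUs and other AI chips are built to do quickly; scaling it up is what makes today’s hardware relevant.

```python
import numpy as np

n_qubits = 10
state = np.zeros(2 ** n_qubits, dtype=complex)
state[0] = 1.0                                 # start in the |00...0> state

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def apply_single_qubit_gate(state, gate, target, n_qubits):
    """Apply a 2x2 gate to one qubit of a dense statevector."""
    state = state.reshape([2] * n_qubits)
    state = np.tensordot(gate, state, axes=([1], [target]))
    # tensordot moves the contracted axis to the front; put it back in place
    state = np.moveaxis(state, 0, target)
    return state.reshape(-1)

# Put every qubit into superposition
for q in range(n_qubits):
    state = apply_single_qubit_gate(state, H, q, n_qubits)

# All 2^10 amplitudes are now equal: a uniform superposition over 1,024 states
print(np.allclose(np.abs(state) ** 2, 1.0 / 2 ** n_qubits))
```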

QC Ware, a software startup that has raised more than $33 million and initially focused only on software that could run on quantum computers, said it needed to change tack and find a solution for clients today until the future quantum machines arrive.

So QC Ware CEO Matt Johnson said the company turned to Nvidia Corp’s graphics processing units (GPUs) to “figure out how can we get them something that is a big step change in performance … and build a bridge to quantum processing in the future.”

GPUs are microchips originally designed to process video for gaming, but they have become so powerful that they now handle the bulk of AI computing. They are increasingly being used in quantum development as well.

This week, QC Ware is launching a software platform called Promethium, which is inspired by quantum computing and is designed to simulate chemical molecules on a regular computer using GPUs. The platform aims to investigate how molecules interact with things like proteins.

Promethium has the potential to significantly reduce simulation times, according to Robert Parrish, the company’s head of quantum chemistry: from hours to minutes for molecules of about 100 atoms, and from months to hours for molecules with up to 2,000 atoms.

Prominent investors, including Eric Schmidt (formerly of Alphabet Inc.), T. Rowe Price, Samsung Ventures, and In-Q-Tel, the venture arm of U.S. intelligence agencies, are backing quantum software startups, which can generate revenue from customers preparing for the arrival of quantum computing’s “iPhone” moment. These startups still face challenges in convincing some prospective clients, however, as the technology is in its early stages of development. According to PitchBook, quantum software startups such as SandBoxAQ, an Alphabet spinoff, have raised approximately $1 billion in the past 18 months.

SandBoxAQ CEO Jack Hidary said it was only 24 months ago that AI chips became powerful enough to simulate hundreds of thousands of chemical interactions simultaneously. It developed a quantum-inspired algorithm for biopharma simulation on Google’s AI chip called a Tensor Processing Unit (TPU) that generates revenue today. SandBoxAQ told Reuters in February it raised $500 million.

Jason Turner, who founded Entanglement Inc in 2017 to be a “quantum-only lab,” became impatient with the slow pace of quantum hardware development. “It’s been ten years away for what, 40 years now, right?” he said. He finally relented, turning to Silicon Valley AI chip startup Groq to help him run a quantum-inspired cybersecurity algorithm.

Ultimately, software inspired by quantum physics won’t perform well on quantum computers without some changes, said William Hurley, boss of Austin-based quantum software startup Strangeworks.

Still, he said companies that start using them will have engineers “learning about quantum and the phenomenon and the process, which will better prepare them to use quantum computers at the point that they do so.” That moment could arrive suddenly, he said. Strangeworks, which also operates a cloud with over 60 quantum computers on it, raised $24 million last month from investors including IBM.

Apple scraps plans for 27-inch display with mini-LED

Apple has scrapped its plans to release a standalone 27-inch display with mini-LED and ProMotion technology, a new report says. This display was first rumored to launch sometime in the summer of 2022 but has faced a number of delays since then.

In a new post to subscription followers on Twitter, reliable analyst Ross Young says that Apple shipped “some panels” for this display last year. Since then, however, the company has reportedly “killed off” the display for the time being.

Although some panels were shipped last year, Apple killed off the 27″ MiniLED display, at least for now.

The first rumors on Apple’s higher-end Studio Display spinoff suggested it could be released in the summer of 2022. The display was then pushed back, with Apple allegedly targeting an October 2022 release. As it became increasingly clear that Apple wouldn’t make that October deadline, Ross Young then said that Apple was targeting a Q1 2023 release. 

Most recently, Young reported in February that the display had been hit with even more delays. He explained at the time that there wasn’t any evidence of mass production within Apple’s supply chain.

As it stands today, Apple sells two different external monitors, the Studio Display and the Pro Display XDR. 9to5Mac has reported that Apple is developing a new external display with a 7K resolution. The current Pro Display XDR has a 32-inch 6K (6016 x 3384) panel with 218 pixels per inch.

It remains to be seen what Apple’s plans for its external displays are at this point. Bloomberg did recently report, however, that Apple is prepping “multiple new external monitors” with Apple Silicon inside.

As Per Google: Its AI Supercomputer Is Faster and Greener Than the Nvidia A100 Chip

Alphabet Inc’s Google on Tuesday released new details about the supercomputers it uses to train its artificial intelligence models, saying the systems are both faster and more power-efficient than comparable systems from Nvidia Corp.

Google has designed its own custom chip called the Tensor Processing Unit, or TPU. It uses those chips for more than 90 per cent of the company’s work on artificial intelligence training, the process of feeding data through models to make them useful at tasks like responding to queries with human-like text or generating images.

The Google TPU is now in its fourth generation. Google on Tuesday published a scientific paper detailing how it has strung more than 4,000 of the chips together into a supercomputer using its own custom-developed optical switches to help connect individual machines.

Improving these connections has become a key point of competition among companies that build AI supercomputers because so-called large language models that power technologies like Google’s Bard or OpenAI’s ChatGPT have exploded in size, meaning they are far too large to store on a single chip.

The models must instead be split across thousands of chips, which must then work together for weeks or more to train the model. Google’s PaLM model – its largest publicly disclosed language model to date – was trained by splitting it across two of the 4,000-chip supercomputers over 50 days.
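A rough back-of-envelope calculation shows why splitting is unavoidable. PaLM’s parameter count is public; the bytes-per-parameter and per-chip memory figures in the Python sketch below are illustrative assumptions rather than Google’s actual numbers.

```python
# Why a large language model cannot live on one chip: memory arithmetic only.
params = 540e9            # PaLM parameters (publicly reported)
bytes_per_param = 2       # e.g. bfloat16 weights (assumption)
hbm_per_chip_gb = 32      # assumed high-bandwidth memory per accelerator chip

model_gb = params * bytes_per_param / 1e9
chips_needed = model_gb / hbm_per_chip_gb
print(f"weights alone: ~{model_gb:.0f} GB, i.e. at least {chips_needed:.0f} chips "
      f"before counting activations, gradients, or optimizer state")
```

In practice, the training state multiplies that requirement several times over, which is why thousands of chips, and fast links between them, are needed.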

Google said its supercomputers make it easy to reconfigure connections between chips on the fly, helping avoid problems and tweak for performance gains.

“Circuit switching makes it easy to route around failed components,” Google Fellow Norm Jouppi and Google Distinguished Engineer David Patterson wrote in a blog post about the system. “This flexibility even allows us to change the topology of the supercomputer interconnect to accelerate the performance of an ML (machine learning) model.”

While Google is only now releasing details about its supercomputer, it has been online inside the company since 2020 in a data centre in Mayes County, Oklahoma. Google said that startup Midjourney used the system to train its model, which generates fresh images after being fed a few words of text.

In the paper, Google said that for comparably sized systems, its supercomputer is up to 1.7 times faster and 1.9 times more power-efficient than a system based on Nvidia’s A100 chip that was on the market at the same time as the fourth-generation TPU.

Google said it did not compare its fourth-generation TPU to Nvidia’s current flagship H100 chip because the H100 came to the market after Google’s chip and is made with newer technology.

Google hinted that it might be working on a new TPU that would compete with the Nvidia H100 but provided no details, with Jouppi telling Reuters that Google has “a healthy pipeline of future chips.”

Google to Cut Back on Employee Laptops, Services, and Office Supplies in Multi-Year Push for Cost Savings

In a rare companywide memo from CFO Ruth Porat, Google kicked off “multi-year” employee service cuts.

Google’s finance chief Ruth Porat recently said in a rare companywide email that the company is making cuts to employee services.

“These are big, multi-year efforts,” Porat said in a Friday email titled: “Our company-wide OKR on durable savings.” Elements of the email were previously reported by The Wall Street Journal.

In separate documents viewed by CNBC, Google said it’s cutting back on fitness classes, staplers, tape and the frequency of laptop replacements for employees.

One of the company’s important objectives for 2023 is to “deliver durable savings through improved velocity and efficiency,” Porat said in the email. “All PAs and Functions are working toward this,” she said, referring to product areas. OKR stands for objectives and key results.

The latest cost-cutting measures come as Alphabet-owned Google continues its most severe era of cost cuts in its almost two decades as a public company. The company said in January that it was eliminating 12,000 jobs, representing about 6% of its workforce, to reckon with slowing sales growth following record head count growth.

Cuts have shown up in other ways. The company declined to pay the remainder of laid-off employees’ maternity and medical leaves, CNBC previously reported.

In her recent email, Porat said the layoffs were “the hardest decisions we’ve had to make as a company.”

“This work is particularly vital because of our recent growth, the challenging economic environment, and our incredible investment opportunities to drive technology forward — particularly in AI,” Porat’s email said.

Porat referred to the year 2008 twice in her email.

“We’ve been here before,” the email stated. “Back in 2008, our expenses were growing faster than our revenue. We improved machine utilization, narrowed our real estate investments, tightened our belt on T&E budgets, cafes, micro kitchens and mobile phone usage, and removed the hybrid vehicle subsidy.”

“Just as we did in 2008, we’ll be looking at data to identify other areas of spending that aren’t as effective as they should be, or that don’t scale at our size.”

In a statement to CNBC, a spokesperson said, “As we’ve publicly stated, we have a company goal to make durable savings through improved velocity and efficiency. As part of this, we’re making some practical changes to help us remain responsible stewards of our resources while continuing to offer industry-leading perks, benefits and amenities.”

Cutting down on desktop PCs and staplers

Among the equipment changes, Google is pausing refreshes for laptops, desktop PCs and monitors. It’s also “changing how often equipment is replaced,” according to internal documents viewed by CNBC.

Google employees who are not in engineering roles but require a new laptop will receive a Chromebook by default. Chromebooks are laptops made by Google and use a Google-based operating system called Chrome OS.

It’s a shift from the range of offerings, such as Apple MacBooks, that were previously available to employees. “It also provides the best opportunity across all of our managed devices to prevent external compromise,” one document about the laptop changes said.

An employee can no longer expense mobile phones if one is available internally, the document also stated. And employees will need director “or above” approval if they need an accessory that costs more than $1,000 and isn’t available internally.

Under a section titled “Desktops and Workstations,” the company said CloudTop, the company’s internal virtual workstation, will be “the default desktop” for Googlers.

In February, CNBC reported that the company had asked its cloud employees and partners to share desks by alternating days, and that they are expected to transition to relying on CloudTop for their workstations.

Google employees have also noticed some more extreme cutbacks to office supplies in recent weeks. Staplers and tape are no longer being provided to print stations companywide as “part of a cost effectiveness initiative,” according to a separate, internal facilities directive viewed by CNBC.

“We have been asked to pull all tape/dispensers throughout the building,” a San Francisco facility directive stated. “If you need a stapler or tape, the receptionist desk has them to borrow.”

A Google spokesperson said the internal message about staplers and tape was misinformed. “Staplers and tape continue to be provided to print stations. Any internal messages that claim otherwise are misinformed.”

‘We’ve baked too many muffins on a Monday’

Google’s also cutting some availability of employee services.

“We set a high bar for industry-leading perks, benefits and office amenities, and we will continue that into the future,” Porat’s email stated. “However, some programs need to evolve for how Google works today.”

“These are mostly minor adjustments,” stated a separate internal document from the company’s real estate and workplace team. The document said food, fitness, massage and transportation programs were designed for when Googlers were coming in five days a week.

“Now that most of us are in 3 days a week, we’ve noticed our supply/demand ratios are a bit out of sync: We’ve baked too many muffins on a Monday, seen GBuses run with just one passenger, and offered yoga classes on a Friday afternoon when folks are more likely to be working from home,” the document stated.

As a result, Google may close cafes on Mondays and Fridays and shut down some facilities that are “underutilized” due to hybrid schedules, the document states.

As a part of the January U.S. layoffs, the company let go of more than two dozen on-site massage therapists.

Read the full email from Ruth Porat here:

This year, one of our important company OKRs is to deliver durable savings through improved velocity and efficiency. All PAs and Functions are working towards this: Googlers have asked for more detail so we’re sharing more information below. This work is particularly vital because of our recent growth, the challenging economic environment, and our incredible investment opportunities to drive technology forward—particularly in AI.

We’ve been here before. Back in 2008, our expenses were growing faster than our revenue. We improved machine utilization, narrowed our real estate investments, tightened our belt on T&E budgets, cafes, Microkitchens and mobile phone usage, and removed the hybrid vehicle subsidy. Since then, we’ve continued to rebalance based on data about how programs and services are being used.

How we’re approaching this

The hardest decisions we’ve had to make as a company were to reduce our workforce, and that is still being worked through in some countries. Most of the other large changes and savings won’t be visible to most Googlers but will make a noticeable difference to our costs — think innovation in machine utilization for AI computing and reduced fragmentation of our tech stack. These are big, multi-year efforts. A few examples:

  • We are focused on distributing our compute workloads even more efficiently, getting more out of our servers and data centers. We’ve already made progress with these efforts and will continue to drive efficiencies – this work adds up given infrastructure is one of our largest areas of investment.
  • As we apply our efficient and well-tuned infrastructure and software to ML, we’re continuing to discover more scalable and efficient ways to train and serve models.
  • Improving external procurement is another area where data suggests significant savings – on everything from software to equipment to professional services. As one part of this, we’re piloting an improved buying hub that helps teams find suppliers that we’ve negotiated great rates with.
  • There are other areas we’ve spoken about that will make a big difference: we’re continuing to redeploy teams to higher priority work, to maintain a slower pace of hiring, to be responsible about our T&E spending, and to implement numerous suggestions from the Simplicity Sprint to improve our execution and increase our velocity – particularly on prioritization, training, launch and business processes, internal tools and meeting spaces.

Changes to programs and services

We want to be upfront that there are also areas where we’ll realize savings that will impact some services Googlers use at work and beyond.

We set a high bar for industry-leading perks, benefits and office amenities, and will continue that into the future. However, some programs need to evolve for how Google works today. As well as helping to bring down costs, these changes will reduce food waste and be better for the environment.

  • We’re adjusting our office services to the new hybrid workweek. Cafes, Microkitchens and other facilities will be tailored to better match how and when they are being used. Decisions will be based on data. For example, where a cafe is seeing a significantly lower volume of use on certain days, we’ll close it on those days and put more focus instead on popular options that are close by. Similarly, we’ll consolidate microkitchens in buildings where we’re seeing more waste than value. We’ll also shift some fitness classes and shuttle schedules based on how they’re being used.
  • We’ve also assessed the equipment we provide Googlers. Today’s devices have a much longer lifespan and greater performance and reliability, so we have made changes to what’s available and how often it’s replaced—while making sure that people have what they need to perform their role. Because equipment is a significant expense for a company of our size, we’ll be able to save meaningfully here.

Just as we did in 2008, we’ll be looking at data to identify other areas of spending that aren’t as effective as they should be, or that don’t scale at our size. We will let Googlers know of any other changes that directly impact services they use. Our opportunities as a company are enormous. We have clear OKRs and substantial resources at our disposal to pursue them, but these resources are finite. Focusing on using them effectively makes a huge difference.

Nokia to Launch 4G Internet on the Moon

According to a Nokia executive, the company is preparing to introduce 4G internet on the moon later this year as part of NASA’s Artemis program to establish a human foothold on the lunar surface. The objective is to demonstrate that terrestrial networks can fulfill the communication requirements of upcoming space expeditions.

Nokia is preparing to launch a 4G mobile network on the moon later this year, in the hopes of enhancing lunar discoveries — and eventually paving the path for a human presence on Earth’s natural satellite.

The Finnish telecommunications group plans to launch the network on a SpaceX rocket over the coming months, Luis Maestro Ruiz De Temino, Nokia’s principal engineer, told reporters earlier this month at the Mobile World Congress trade show in Barcelona.

The network will be powered by an antenna-equipped base station stored in a Nova-C lunar lander designed by U.S. space firm Intuitive Machines, as well as by an accompanying solar-powered rover.

An LTE connection will be established between the lander and the rover.

The infrastructure will land on the Shackleton crater, near the moon’s south pole.

Nokia says the technology is designed to withstand the extreme conditions of space.

The network will be used within NASA’s Artemis program, which aims to send the first human astronauts to walk on the moon’s surface since 1972.

The aim is to show that terrestrial networks can meet the communications needs for future space missions, Nokia said, adding that its network will allow astronauts to communicate with each other and with mission control, as well as to control the rover remotely and stream real-time video and telemetry data back to Earth.

The lander will launch via a SpaceX rocket, according to Maestro Ruiz De Temino. He explained that the rocket won’t take the lander all the way to the moon’s surface — it has a propulsion system in place to complete the journey.

Anshel Sag, principal analyst at Moor Insights & Strategy, said that 2023 was an “optimistic target” for the launch of Nokia’s equipment.

“If the hardware is ready and validated as it seems to be, there is a good chance they could launch in 2023 as long as their launch partner of choice doesn’t have any setbacks or delays,” Sag told CNBC via email. 

Nokia previously said that its lunar network will “provide critical communication capabilities for many different data transmission applications, including vital command and control functions, remote control of lunar rovers, real-time navigation and streaming of high definition video.”

Lunar ice

One of the things Nokia is hoping to achieve with its lunar network is to help find ice on the moon. Much of the moon’s surface is dry, but recent uncrewed missions have yielded discoveries of ice remnants trapped in sheltered craters around the poles.

Such water could be treated and used for drinking, broken up into hydrogen and oxygen for use as rocket fuel, or separated to provide breathable oxygen to astronauts.

“I could see this being used by future expeditions to continue to explore the moon since this really seems like a major test of the capabilities before starting to use it commercially for additional exploration and potential future mining operations,” Sag told CNBC.

“Mining requires a lot of infrastructure to be in place and having the right data about where certain resources are located.”

We’ll need more than just internet connectivity if we’re ever to live on the moon. Engineering giant Rolls-Royce, for example, is working on a nuclear reactor to provide power to future lunar inhabitants and explorers.