
WhatsApp Introduces Innovative Local Data Transfer Method Using QR Codes

WhatsApp has recently revealed a faster, more efficient way to transfer chats from an old phone to a new device through a QR code-based method.

The company announced that users who switch to a different phone with the same operating system will now have the option to transfer their WhatsApp data utilizing a local Wi-Fi connection.

To successfully transfer your chat history, ensure that both devices are powered on and connected to the same Wi-Fi network. Follow these steps:

  1. Launch WhatsApp on your old device and navigate to Settings > Chats > Chat transfer.
  2. Upon completion, a QR code will be displayed.
  3. Scan the QR code using the new phone to finalize the transfer process.

WhatsApp emphasizes that this method provides enhanced security compared to third-party solutions, as the data is encrypted and solely exchanged between the two devices within your local network.
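WhatsApp has not published the details of its transfer protocol, but the general pattern behind QR pairing over local Wi-Fi can be sketched as follows: the old phone encodes its local endpoint and a one-time secret into the QR payload, and the new phone scans it to open an encrypted connection. All names below are illustrative, not WhatsApp's actual implementation.

```python
import base64
import json
import secrets

def make_qr_payload(ip: str, port: int) -> str:
    """Old phone: bundle a local endpoint and a one-time key into the QR payload."""
    key = secrets.token_bytes(32)  # one-time symmetric key; never leaves the local network
    payload = {
        "ip": ip,
        "port": port,
        "key": base64.b64encode(key).decode("ascii"),
    }
    return json.dumps(payload)

def parse_qr_payload(payload: str) -> dict:
    """New phone: recover the endpoint and key after scanning the QR code."""
    data = json.loads(payload)
    data["key"] = base64.b64decode(data["key"])
    return data

# Pairing round trip: the new phone recovers exactly what the old phone encoded.
encoded = make_qr_payload("192.168.1.20", 8443)
decoded = parse_qr_payload(encoded)
```

Because the key travels only inside the QR image shown on the old phone's screen, nothing secret ever crosses the internet, which is the property WhatsApp is emphasizing here.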

Until now, WhatsApp relied on cloud backups to facilitate data transfer between devices operating on the same iOS or Android platform. This marks the company’s first introduction of a local transfer method for such migrations.

WhatsApp already supports migration between iOS and Android devices. However, both the cloud-backup and cross-platform methods are more involved than simply scanning a QR code.

Additionally, the chat app offers a multi-device feature for a single account, enabling message synchronization across various devices linked to the same phone number. In April, the company enhanced this feature to accommodate multiple phones as well.

YouTube Implements Stricter Guidelines for Fan Channels to Prevent Impersonation

YouTube is introducing updated guidelines on impersonation, requiring fan channels to explicitly indicate in their channel name or handle that they are not affiliated with the represented company or artist. These changes will take effect on August 21, 2023.

Under the new guidelines, YouTube will not permit channels that claim to be fan accounts but actually pose as other channels by reuploading their content. Additionally, channels using the same name and avatar or banner as another channel, with minor alterations like inserting a space or substituting a letter with a zero, will also be disallowed.
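The "minor alterations" YouTube describes, such as an inserted space or a letter swapped for a look-alike digit, can be caught with a simple normalization pass. The sketch below is illustrative only and is not YouTube's actual detection logic.

```python
# Map common look-alike digit substitutions back to their letters.
LOOKALIKES = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s", "7": "t"})

def normalize(channel_name: str) -> str:
    """Lowercase, strip spaces, and undo look-alike digit swaps."""
    return channel_name.lower().replace(" ", "").translate(LOOKALIKES)

def is_near_duplicate(candidate: str, existing: str) -> bool:
    """Flag names that collapse to the same normalized form."""
    return normalize(candidate) == normalize(existing)
```

Under this rule, "C0ol Channel" and "CoolChannel" both normalize to "coolchannel" and would be flagged, while genuinely distinct names pass through.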

Furthermore, users will be prohibited from setting up channels using the name and image of another person and then posting comments on other channels as if they were made by that individual. Similarly, channels claiming to be fan accounts in their description but failing to clearly state it in their channel name or handle will be in violation of the policy.

In a blog post, YouTube stated, “This update will help genuine fan channels know exactly how you can celebrate your favorite creators, while also protecting original creators from content and channels that are impersonating them.” The company aims to ensure that viewers are not misled by the channels they engage with and follow, while creators are safeguarded against the unauthorized use of their name and likeness.

Previously, YouTube did not have strict guidelines specifically addressing fan accounts, simply stating that impersonation channels were not allowed on the platform. With the new policy, fan accounts must explicitly declare their nature as fan channels to avoid potential channel deletion. The updated guidelines aim to eliminate confusion regarding the “source of goods and services advertised” and prevent channels from engaging in malicious practices.

By implementing these stricter guidelines, YouTube seeks to strike a balance between allowing genuine fan channels to thrive and protecting original creators from impersonation and misleading content.

Meta to Pull News from Facebook and Instagram in Canada

In response to the recently approved Online News Act in Canada, Meta, formerly known as Facebook, has announced its plans to discontinue access to news on Facebook and Instagram for Canadian users. The legislation, known as Bill C-18, requires internet giants to negotiate compensation agreements with news publishers for the use of their content, including posting or linking to it.

In a blog post, Meta stated, “We are confirming that news availability will be ended on Facebook and Instagram for all users in Canada prior to the Online News Act (Bill C-18) taking effect.” The company has been vocal about its stance on the legislation since it was first proposed in 2021. Last year, Meta threatened to block the sharing of Canadian news content unless the government made amendments to the legislation. Earlier this month, Meta began blocking news on its platforms for some Canadian users. With the bill approved by the Senate and awaiting royal assent, which is considered a formality, Meta is prepared to follow through on its previous warnings.

Canadian Heritage Minister Pablo Rodriguez expressed his disagreement with Meta’s decision in a tweet, emphasizing that Meta is currently not obligated to comply with the act. Rodriguez stated, “Facebook knows very well that they have no obligations under the act right now,” and questioned who would stand up for Canadians against tech giants if the government does not.

It is noteworthy that Meta is not the only internet giant expressing discontent with the legislation. Earlier this year, Google conducted tests that restricted access to news content for some Canadian users. The company stated its efforts to find a mutually agreeable solution, expressing a desire to avoid an unfavorable outcome. Google has proposed various solutions throughout the process, seeking to address concerns and facilitate increased investments in the Canadian news ecosystem. However, the company claims that none of their concerns have been addressed, and they consider Bill C-18 to be unworkable. Google remains committed to urgently collaborating with the government to find a way forward.

Canada’s legislation bears similarities to a law passed in Australia in 2021. Meta had temporarily removed news content from the platform in Australia following the law’s passage but eventually restored it after the Australian government amended the legislation to allow for extended negotiation time between the platform and publishers.

WhatsApp Introduces Auto Silence Call Feature for Unknown Numbers

In an effort to tackle the rising issue of spam calls reported by its massive user base in India, WhatsApp has announced a new feature that allows users to automatically silence calls from unknown numbers. The implementation of this feature was unveiled during an announcement made by Mark Zuckerberg on Tuesday, where he emphasized the importance of privacy and control for WhatsApp users.

How to Use Auto Silence Call Feature:

WhatsApp’s call silencing feature provides an effective solution to the persistent problem of spam calls. By enabling the “Silence unknown caller” option in the Settings > Privacy > Calls menu, users can automatically mute incoming calls from unfamiliar contacts. This empowers users to regain control over their call experience and ensures enhanced privacy by filtering out unwanted interruptions.
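The decision rule behind the setting is simple: if the incoming number is not among the user's saved contacts, the call still arrives but rings silently. A minimal sketch of that rule, with hypothetical names:

```python
def should_silence(caller: str, contacts: set[str], silence_unknown: bool) -> bool:
    """Return True when an incoming call should ring silently.

    The call is still received and logged either way; only the ringtone
    is suppressed, matching the behavior WhatsApp describes.
    """
    return silence_unknown and caller not in contacts

contacts = {"+91 98765 43210", "+1 555 0100"}
should_silence("+44 20 7946 0958", contacts, silence_unknown=True)   # True: unknown number
should_silence("+1 555 0100", contacts, silence_unknown=True)        # False: saved contact rings
```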

Retaining Call Records and Notifications:

While incoming calls from unknown numbers will be silenced, WhatsApp still allows users to access the relevant information at their convenience. Notifications and call records from unidentified contacts will continue to appear in the app, enabling users to review them later if necessary. This feature caters to situations where the caller might be someone known to the user, but whose number has not been saved in their contacts.

Addressing Privacy Concerns:

With this new update, WhatsApp reaffirms its commitment to user privacy. By giving users the ability to selectively silence calls from unknown numbers, the app strikes a balance between safeguarding privacy and maintaining accessibility. This feature ensures that users no longer have to tolerate incessant spam calls while still having the freedom to stay connected with the people who matter to them.


Addressing Spam Calls in India:

India, with its vast WhatsApp user base exceeding 500 million people, has been disproportionately affected by the issue of spam calls. The implementation of the call silencing feature demonstrates WhatsApp’s commitment to addressing the specific concerns of Indian users and prioritizing their privacy and convenience.

As the menace of spam calls continues to plague smartphone users, WhatsApp’s new call silencing feature brings a much-needed respite. By providing an efficient method to manage incoming calls from unknown numbers, WhatsApp empowers users to reclaim control over their communication experience, safeguard their privacy, and maintain uninterrupted connections with their contacts.

Meta Unveils Voicebox: Next-Gen Voice Synthesis Model

Meta Platform’s AI research division has unveiled Voicebox, a groundbreaking machine learning model capable of generating speech from text. Unlike traditional text-to-speech models, Voicebox showcases remarkable versatility by effortlessly tackling various tasks, including editing, noise removal, and style transfer, even without specific training.

The model’s training process employed a unique methodology devised by Meta researchers. Although Meta has refrained from releasing Voicebox to address ethical apprehensions regarding potential misuse, the initial findings are highly encouraging and hold tremendous potential for a wide range of applications in the times ahead.

Voicebox is a generative model that can synthesize speech in six languages: English, French, Spanish, German, Polish, and Portuguese. Like large language models, it has been trained on a very general task that can be used for many applications. But while LLMs try to learn the statistical regularities of words and text sequences, Voicebox has been trained to learn the patterns that map voice audio samples to their transcripts.

Such a model can then be applied to many downstream tasks with little or no fine-tuning. “The goal is to build a single model that can perform many text-guided speech generation tasks through in-context learning,” Meta’s researchers write in their paper (PDF) describing the technical details of Voicebox.

The model was trained using Meta’s “Flow Matching” technique, which is more efficient and generalizable than the diffusion-based learning methods used in other generative models. The technique enables Voicebox to “learn from varied speech data without those variations having to be carefully labeled.” Without the need for manual labeling, the researchers were able to train Voicebox on 50,000 hours of speech and transcripts from audiobooks.

The model uses “text-guided speech infilling” as its training goal, which means it must predict a segment of speech given its surrounding audio and the complete text transcript. Basically, it means that during training, the model is provided with an audio sample and its corresponding text. Parts of the audio are then masked and the model tries to generate the masked part using the surrounding audio and the transcript as context. By doing this over and over, the model learns to generate natural-sounding speech from text in a generalizable way.
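A toy version of that infilling objective can make the idea concrete. The sketch below assumes nothing about Voicebox's internals: it just masks a span of audio frames and packages the surrounding audio plus the full transcript as context, with the masked frames as the prediction target.

```python
import random

def mask_span(audio: list, span: tuple) -> list:
    """Replace the frames in [start, end) with None — the 'hole' to infill."""
    start, end = span
    return [None if start <= i < end else x for i, x in enumerate(audio)]

def infilling_example(audio: list, transcript: str) -> dict:
    """Build one training example: masked audio + transcript as context,
    the masked frames as the reconstruction target."""
    start = random.randrange(0, len(audio) - 1)
    end = random.randrange(start + 1, len(audio) + 1)
    return {
        "context_audio": mask_span(audio, (start, end)),
        "transcript": transcript,    # the full text is always visible to the model
        "target": audio[start:end],  # what the model must reconstruct
    }
```

Repeating this over 50,000 hours of audiobook audio is what lets the model generalize: at inference time, any task that can be phrased as "fill in this masked span given the text" comes for free.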

Replicating voices across languages, editing out mistakes in speech, and more

Unlike generative models that are trained for a specific application, Voicebox can perform many tasks that it has not been trained for. For example, the model can use a two-second voice sample to generate speech for new text. Meta says this capability can be used to bring speech to people who are unable to speak or customize the voices of non-playable game characters and virtual assistants.

Voicebox also performs style transfer in different ways. For example, you can provide the model with two audio and text samples. It will use the first audio sample as style reference and modify the second one to match the voice and tone of the reference. Interestingly, the model can do the same thing across different languages, which could be used to “help people communicate in a natural, authentic way — even if they don’t speak the same languages.”

The model can also do a variety of editing tasks. For example, if a dog barks in the background while you’re recording your voice, you can provide the audio and transcript to Voicebox and mask out the segment with the background noise. The model will use the transcript to generate the missing portion of the audio without the background noise. 

The same technique can be used to edit speech. For example, if you have misspoken a word, you can mask that portion of the audio sample and pass it to Voicebox along with a transcript of the edited text. The model will generate the missing part with the new text in a way that matches the surrounding voice and tone.

One of the interesting applications of Voicebox is voice sampling. The model can generate various speech samples from a single text sequence. This capability can be used to generate synthetic data to train other speech processing models. “Our results show that speech recognition models trained on Voicebox-generated synthetic speech perform almost as well as models trained on real speech, with 1 percent error rate degradation as opposed to 45 to 70 percent degradation with synthetic speech from previous text-to-speech models,” Meta writes.

Voicebox has limits too. Since it has been trained on audiobook data, it does not transfer well to conversational speech that is casual and contains non-verbal sounds. It also doesn’t provide full control over different attributes of the generated speech, such as voice style, tone, emotion, and acoustic condition. The Meta research team is exploring techniques to overcome these limitations in the future.

Meta Unveils I-JEPA: A Breakthrough Self-Supervised Learning Model for World Understanding

Meta, under the guidance of its chief AI scientist Yann LeCun, has achieved a significant milestone in the development of deep learning systems capable of learning world models with minimal human intervention. The company has recently released the inaugural version of I-JEPA, a cutting-edge machine learning (ML) model that acquires abstract representations of the world through self-supervised learning on images.

Early evaluations have demonstrated I-JEPA’s exceptional performance across various computer vision tasks. Moreover, the model exhibits remarkable efficiency, demanding just a fraction of the computing resources required by other state-of-the-art models during training. In a testament to their commitment to fostering collaboration and advancement, Meta has made the training code and model open source, and they are set to showcase I-JEPA at the prestigious Conference on Computer Vision and Pattern Recognition (CVPR) next week.

The launch of I-JEPA marks a significant step toward the realization of LeCun’s long-standing vision. By leveraging self-supervised learning on vast amounts of unlabeled data, I-JEPA autonomously learns abstract representations of the world, gradually developing a deep understanding of its intricacies. This capability holds tremendous potential for advancing the field of computer vision and revolutionizing various domains that heavily rely on visual data analysis.

Early tests have demonstrated I-JEPA’s prowess across a range of computer vision tasks, showcasing its ability to extract meaningful insights from complex images. Whether it’s object recognition, scene understanding, or image generation, the model consistently delivers impressive results, surpassing existing benchmarks. The breakthrough lies not only in its performance but also in its efficiency. I-JEPA significantly reduces the computational burden, requiring just a fraction of the resources consumed by contemporary models during training. This efficiency paves the way for accelerated research, wider adoption, and more accessible development of advanced computer vision systems.

Meta’s commitment to open collaboration and knowledge sharing is evident in their decision to open-source the training code and model for I-JEPA. By making these resources freely available to the research and development community, Meta encourages innovation and collaboration, fostering a collective effort to push the boundaries of computer vision. This move is expected to facilitate further advancements, as researchers and practitioners can build upon the foundation laid by I-JEPA, unlocking new possibilities and fueling breakthroughs in various real-world applications.

The upcoming presentation of I-JEPA at the renowned CVPR conference highlights the significance of this achievement within the computer vision community. It serves as a platform for Meta to showcase the potential of their self-supervised learning model, garner feedback from experts, and inspire further research and exploration. By sharing their findings and engaging with the community, Meta aims to stimulate dialogue, collaboration, and collective progress in the pursuit of more intelligent and capable computer vision systems.

In conclusion, Meta’s release of I-JEPA represents a significant advancement in the realm of deep learning and computer vision. The model’s ability to learn abstract representations of the world through self-supervised learning on images heralds a new era of autonomous knowledge acquisition. With exceptional performance across computer vision tasks and impressive computational efficiency, I-JEPA opens doors to enhanced visual understanding and analysis. By open-sourcing the training code and model, Meta invites collaboration and aims to accelerate advancements in the field. As I-JEPA takes the stage at CVPR, the excitement and anticipation within the computer vision community are palpable, underscoring the transformative potential of this groundbreaking achievement.

Meta Confirms Launch of New Social Media Platform Threads to Compete with Twitter

Meta has officially confirmed the rumors circulating about its plans to launch a new social media platform named Threads to compete with Twitter. The app could be released as early as the end of June.

During a companywide meeting, Meta’s chief product officer, Chris Cox, showcased several screenshots of the upcoming app, revealing its close integration with Instagram. Threads will utilize Instagram’s account system to populate users’ information, allowing them to sign up and log in using their Instagram credentials.

Internal documents obtained by The Verge suggest that the project was internally referred to as “Project 92.” The chosen name, Threads, points to a focus on Twitter-style threaded conversations, enabling users to provide additional context through a series of connected posts.

Notably, Threads will also feature compatibility with other social media platforms such as Mastodon and Bluesky, further expanding its reach and potential user base.

With Meta’s entry into the competitive social media landscape, alongside the rising popularity of alternative platforms like Bluesky and Mastodon, the launch of Threads is poised to introduce a new player in the realm of online social networking.

The Elon Musk factor

Ever since Musk took over Twitter and made it a private company, many celebrities and prominent figures have left the app. The reasons for their exodus have ranged from reduced content moderation to an increase in hate speech. Elton John left Twitter in December, saying that Twitter is allowing “misinformation to flourish unchecked.” Jim Carrey, who had 19 million followers on the app, deactivated his profile as well. Others who followed were Whoopi Goldberg, Shonda Rhimes, Gigi Hadid, and Jameela Jamil.

In a direct jibe at Twitter CEO Elon Musk, Cox said, “We’ve been hearing from creators and public figures who are interested in having a platform that is sanely run, that they believe that they can trust and rely upon for distribution.” With a focus on “safety, ease of use, reliability,” Meta wants to make sure that users have a “stable place to build and grow their audiences,” he added.

Meta has been meeting with content creators and public figures ahead of launching the platform. Cox revealed that Meta is in talks with the likes of Oprah Winfrey, the Dalai Lama, and DJ Slime to convince them to use the app.

New Screen Sharing Feature Added to WhatsApp Beta for Android Users

WhatsApp, the popular instant messaging platform owned by Meta, has announced an exciting new feature called “screen sharing” for its beta users on Android. This addition aims to enhance the user experience and provide a more effective way of communication. With screen sharing, users can now share their screens with others, allowing them to view documents, photos, or any other content on their device in real-time.

How to Initiate Screen Sharing Feature

To initiate screen sharing, beta users need to follow a simple process. In a chat, they can tap on the attachment icon and select the “Screen Share” option. Once activated, the recipient will be able to see the sender’s screen, enabling them to have a synchronized viewing experience. This feature opens up possibilities for collaboration, troubleshooting, or simply sharing content with friends and family.

Enhancing Communication and Collaboration

The introduction of screen sharing on WhatsApp provides a valuable tool for communication and collaboration. Users can now seamlessly share their screens during video calls or voice chats, making it easier to discuss and understand visual information. Whether it’s reviewing a presentation, going through a document together, or sharing memorable moments captured in photos, screen sharing enriches the overall conversation experience.

Exclusive Availability for WhatsApp Beta Users on Android

Currently, the screen sharing feature is exclusively available for WhatsApp beta users on Android. Beta testing allows developers to gather feedback and identify any potential issues before releasing the feature to the wider public. This phased rollout ensures a smoother and more refined experience when the feature becomes available to all WhatsApp users.


WhatsApp Introduces Message Editing Capability

In addition to the screen sharing feature, WhatsApp has also introduced a new capability that allows users to edit messages they have already sent. This feature addresses the common scenario of sending a message and realizing an error or wanting to clarify something shortly after.

How to Edit Sent Messages

To make changes to a sent message, users can simply long-press on the desired message and choose the “Edit” option from the menu that appears. It’s important to note that editing is only available within the first 15 minutes after sending the message. After this window closes, the message becomes uneditable.
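The 15-minute cutoff is just a timestamp comparison. A minimal sketch of the rule, with hypothetical names:

```python
from datetime import datetime, timedelta

EDIT_WINDOW = timedelta(minutes=15)

def is_editable(sent_at: datetime, now: datetime) -> bool:
    """A message can be edited only within 15 minutes of being sent."""
    return now - sent_at <= EDIT_WINDOW

sent = datetime(2023, 6, 20, 12, 0)
is_editable(sent, datetime(2023, 6, 20, 12, 10))  # True: 10 minutes elapsed
is_editable(sent, datetime(2023, 6, 20, 12, 20))  # False: window expired
```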

Ensuring Transparency and Preventing Miscommunication

When a message is edited, it is clearly labeled as “Edited” when viewed by others. This labeling feature promotes transparency, ensuring that recipients are aware that changes have been made to the original message. By providing this information, WhatsApp aims to prevent any secretive actions or miscommunication that may arise from edited messages. However, the edit history itself remains private and will not be visible to others.

The introduction of screen sharing and message editing features on WhatsApp reflects the platform’s commitment to continually enhancing user experience and facilitating effective communication. Screen sharing allows users to share their screens in real-time, enabling synchronized viewing of documents, photos, and more. The message editing capability addresses the need for quick corrections or clarifications after a message has been sent, promoting transparency and preventing miscommunication. With these new features, WhatsApp aims to provide a versatile and reliable platform for users to connect, collaborate, and communicate seamlessly.

Here’s How to Recover Lost Snapchat Streak

Losing a Snapstreak on Snapchat can be disappointing, especially if it was a long-standing streak. However, Snapchat offers a feature called Snapstreak Restore that allows you to recover a lost streak. It’s important to note that while Snapchat provides the tools for restoration, the Snapchat support team cannot restore the streak for you. In this article, we will guide you through the steps to recover your lost Snapchat streak and continue your streaks with friends.

Step 1: Update Your Snapchat App:

Before attempting to restore your Snapstreak, ensure that you have the latest version of the Snapchat app installed on your device. Keeping your app updated is crucial to access all the latest features and functionalities, including Snapstreak Restore.

Step 2: Access Your Chat Feed:

Launch the Snapchat app on your device and navigate to your Chat feed. This is where you can find all your ongoing conversations with friends.

Step 3: Look for the Snapstreak Restore Button:

If you have recently lost a streak with a friend, you will notice a button next to their name that resembles a camera. This button indicates that you have the option to restore the lost Snapstreak.

Step 4: Tap the ‘Restore’ Button:

Tap on the ‘Restore’ button next to the friend’s name with whom you lost the Snapstreak. This action will open the Reply camera, allowing you to initiate the Snapstreak restoration process.

Step 5: Send a Snap to Your Friend:

To proceed with the restoration, take a Snap, which can be either a photo or a video, using the Reply camera. Once you have captured the Snap, send it to your friend.

Step 6: Follow the On-Screen Instructions:

After sending the Snap, follow the instructions provided on the screen to complete the Snapstreak restoration process. These instructions may vary based on the specific requirements for restoring your streak. It is important to carefully read and follow each step to ensure a successful restoration.


Additional Tips to Maintain Snapstreaks:

While Snapstreak Restore can help recover lost streaks, it’s equally important to take preventive measures to maintain your streaks. Here are some additional tips:

  1. Regular Communication: Stay in touch with your friends on Snapchat by sending Snaps and engaging in chats. Consistent communication is essential to maintain Snapstreaks.
  2. Set Reminders: If you’re worried about forgetting to send a Snap within the 24-hour timeframe, consider setting reminders or alarms to prompt you to send a Snap to your streak friends.
  3. Mutual Commitment: Talk to your streak friends and establish a mutual commitment to maintain the streak. This way, both parties will be equally invested and dedicated to preserving the Snapstreak.
  4. Be Mindful of Time Zones: If you have friends in different time zones, be mindful of the time difference when sending Snaps. Adjust your schedule accordingly to ensure timely communication.
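The 24-hour rule behind tip 2 can be turned into a concrete reminder check: given when the last Snap was exchanged, compute how long remains before the streak lapses. This sketch assumes a strict 24-hour window, which matches the timeframe mentioned above.

```python
from datetime import datetime, timedelta

STREAK_WINDOW = timedelta(hours=24)

def hours_remaining(last_snap: datetime, now: datetime) -> float:
    """Hours left to send a Snap before the streak lapses (0 if already lapsed)."""
    remaining = STREAK_WINDOW - (now - last_snap)
    return max(remaining.total_seconds() / 3600, 0.0)

last = datetime(2023, 6, 20, 9, 0)
hours_remaining(last, datetime(2023, 6, 20, 21, 0))  # 12.0 hours left
hours_remaining(last, datetime(2023, 6, 21, 10, 0))  # 0.0 — streak lapsed
```

Pairing this with a phone alarm a few hours before the window closes is a simple way to follow tip 2 without constantly checking the app.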

Conclusion:

Losing a Snapchat streak can be disheartening, but with the Snapstreak Restore feature, you have the opportunity to recover your lost streaks. By following the steps outlined in this article, you can initiate the restoration process and continue your streaks with friends. Remember to keep your Snapchat app updated and maintain regular communication with your streak friends to prevent future streak losses. Happy snapping!

Company Offers $100 an Hour to Watch TikTok Videos for 10 Hours

TikTok users in the US have a chance to earn while scrolling through the short-form video app. Influencer marketing agency Ubiquitous is looking to pay three people $100 per hour for a 10-hour TikTok watching session.

In a world where social media platforms dominate our daily lives, one company has come up with an extraordinary proposition: get paid a generous $100 per hour to watch TikTok videos for a total of 10 hours. This enticing offer has caught the attention of many individuals seeking unique ways to earn money while indulging in their favorite online pastime. In this article, we will delve into the details of this opportunity and explore the implications it has for both avid TikTok users and the wider job market.

Unleashing the Potential of TikTok:

TikTok, the wildly popular video-sharing platform, has taken the digital world by storm. With its engaging content, creative challenges, and vast user base, TikTok has become a global phenomenon. Recognizing the immense popularity of the platform, the company offering this unique opportunity aims to harness the potential of TikTok by leveraging the audience’s interest to their advantage.

The $100 an Hour Proposition:


The idea of getting paid handsomely to watch TikTok videos for 10 hours may sound like a dream job for many. However, it’s important to understand the specifics of this proposition. The company behind this offer typically conducts market research and data analysis to gain insights into user behavior and preferences on TikTok. By having individuals watch and rate specific videos, they gather valuable information that can be utilized by businesses and marketers.

Job Requirements and Responsibilities:

While the prospect of earning $100 an hour sounds enticing, it’s crucial to consider the requirements and responsibilities associated with the position. Candidates for this unique job must possess a genuine interest in TikTok and be willing to dedicate 10 hours of their time to watch and evaluate various videos. Attention to detail, critical thinking, and the ability to provide constructive feedback are essential skills for this role. It is important to note that this opportunity may be available for a limited time or to a select number of participants.

Implications for the Job Market:

The emergence of such opportunities raises questions about the changing landscape of the job market and the potential for unconventional roles. As technology continues to advance, traditional employment structures are being challenged. The rise of the gig economy and remote work have paved the way for innovative job offerings that cater to people’s interests and lifestyles. This TikTok-watching opportunity serves as a prime example of how companies are capitalizing on popular trends to gather valuable data while simultaneously providing income-generating opportunities for individuals.

The Influence of User-Generated Content:

User-generated content has become a significant driving force behind modern marketing strategies. Platforms like TikTok allow users to create and share content that resonates with a broad audience. By harnessing the power of user-generated content, businesses can gain insights into consumer preferences and trends, enabling them to make informed decisions and tailor their products or services accordingly. The opportunity to watch TikTok videos for a substantial hourly wage highlights the value placed on user-generated content and the pivotal role it plays in shaping marketing strategies.

The allure of being paid $100 an hour to watch TikTok videos for 10 hours undoubtedly captivates the attention of many individuals. This opportunity not only provides a source of income but also sheds light on the evolving job market and the increasing demand for unconventional roles.

As user-generated content continues to shape the digital landscape, companies are tapping into popular trends to gather valuable insights and offer unique employment opportunities. Whether this opportunity remains a short-term venture or paves the way for similar positions, it represents a fascinating intersection between technology, market research, and individual interests. So, if you’re a TikTok enthusiast looking to earn some extra cash, this may just be the opportunity you’ve been waiting for.