Pika Labs’ text-to-video AI platform opens to all: Here’s how to use it


As AI continues to penetrate the creative space, generating both value and concern, six-month-old Pika Labs has announced that its text-to-video AI platform, Pika 1.0, is now available to everyone.

Accessible via the web, Pika 1.0 allows users to generate and edit videos in diverse styles, such as 3D animation, anime and cinematic, from simple text prompts.

The launch comes as other players in the AI video space, including Stability AI and Runway, race ahead with their respective offerings to give businesses and individuals an easy way to create video content. Stability recently launched its image-to-video offering on its developer platform.

What to expect from Pika 1.0?

Announced last month, Pika 1.0 comes with an easy-to-use conversational interface (similar to ChatGPT), where a user describes the video they envision. Once the prompt is entered, the underlying model produces the result.


Pika says that the model can produce a wide range of content, including 3D animations, live-action clips and cinematic videos, as well as modify moving objects (like a horse or an outfit) with simple text prompts. 

When we tested the platform, it produced results in about a minute, but the output was inconsistent on many occasions. The three-second clips resulting from text prompts were at times blurry or out of place, with the subject deformed or out of focus. Some results, however, were right on the mark, including a clip of a rottweiler wearing a Santa cap.

As the tool goes mainstream and Pika updates the model, we expect these gaps to be resolved. At this point, though, what we like about Pika 1.0 is the wide range of customizations it offers. When producing a video from text, the tool provides a range of options, including a way to adjust frames per second between 8 and 24 and the aspect ratio of the clip. Users can even adjust motion elements, including camera pan, tilt and zoom, and the strength of motion.
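Pika exposes these settings only through its web UI; there is no public API at the time of writing. As a purely illustrative sketch, with every name hypothetical rather than Pika's own, the generation options described above could be modeled like this:

```python
from dataclasses import dataclass

# Hypothetical model of Pika 1.0's generation settings. Pika offers these
# controls only in its web UI, so all names and defaults here are
# illustrative assumptions, not a real API.
@dataclass
class GenerationSettings:
    prompt: str
    fps: int = 24              # adjustable between 8 and 24 in the UI
    aspect_ratio: str = "16:9"
    camera_pan: int = 0        # motion controls: pan, tilt, zoom
    camera_tilt: int = 0
    camera_zoom: int = 0
    motion_strength: int = 1

    def __post_init__(self):
        # Enforce the 8-24 fps range the UI exposes
        if not 8 <= self.fps <= 24:
            raise ValueError("fps must be between 8 and 24")

settings = GenerationSettings(prompt="a rottweiler wearing a Santa cap", fps=12)
```

The validation step mirrors the constraint the UI imposes: values outside the 8 to 24 fps range are rejected.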

Additionally, once a clip has been produced, users can fine-tune it further. The model offers options to regenerate with the same prompt, enter a new prompt or edit what was produced. The edit function allows you to modify a specific region of the clip (with new objects and props) and expand the canvas to a different aspect ratio. It can also add four more seconds to the produced clip or upscale its quality.
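These editing steps also live in Pika's web UI, but the workflow they describe can be sketched in code. The class and method names below are hypothetical, used only to show how a three-second generated clip can be extended by four seconds and then upscaled:

```python
# Hypothetical sketch of the post-generation editing flow described above.
# Pika's actual tooling is a web UI; nothing here is a real Pika API.
class Clip:
    def __init__(self, prompt: str, seconds: int = 3):
        self.prompt = prompt
        self.seconds = seconds   # text prompts yield roughly 3-second clips
        self.upscaled = False

    def extend(self) -> "Clip":
        # Mirrors the option to add four more seconds to a produced clip
        self.seconds += 4
        return self

    def upscale(self) -> "Clip":
        # Mirrors the option to upscale the clip's quality
        self.upscaled = True
        return self

clip = Clip("a horse galloping, cinematic").extend().upscale()
```

Chaining the calls reflects how the UI applies edits one after another to the same clip.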

The editing smarts of the tool are powered by the image-to-video and video-to-video capabilities of the model. If required, users can even upload their own photos/videos to bring them to life with these features. Imagine being able to convert a static meme into a cinematic clip.

How to get started with Pika 1.0?

To get started with video generation on Pika, users sign up via Google or Discord on the company’s dedicated web platform. After an account is created, the company places the user on a waitlist. However, the wait is not long.

Within minutes of enrolling, Pika Labs sends an email confirming full access to Pika 1.0.

“Start using it to create videos on command. Unlimited access is still free, so go wild,” the company notes in the email. Users can then launch the platform and start describing their stories for producing video content in varied styles.

“We know firsthand that making high-quality content is difficult and expensive, and we built Pika to give everyone, from home users to film professionals, the tools to bring high-quality video to life. Our vision is to enable anyone to be the director of their stories and to bring out the creator in all of us,” Demi Guo, the CEO of the company, said in a statement last month.

Growing competition for AI videos

So far, Pika has raised $55 million in funding at a valuation of nearly $200 million. However, the company is not alone in the AI video space. It competes with known and heavily funded players such as Adobe, Runway and Stability AI. 

Stability recently added its Stable Video Diffusion model to its AI platform for developers; Runway is already being used to add motion to memes, driving virality; and Adobe is experimenting with capabilities like video upscaling and object editing for its Creative Cloud products. Adobe also reportedly acquired Rephrase AI to up its video-generation game.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.

