Company news
February 3, 2025

Bringing Stories to Life: Dalila Taleb, One of CGDream's Ambassador Program Winners, on AI Creativity and Workflow


Image Creation with CGDream

  1. Could you walk us through your process of creating images using CGDream?

    Creating an image or a series of images is often tied to a specific project. It could be a teaser for one of my novels, a chapter, or a video song I want to release. Typically, it starts with a scene unfolding in my mind, which I then bring to life by creating visuals that align with that vision and serve the project's purpose.
  2. Were there any specific features or tools in CGDream that stood out or helped you the most?

    The filters are an excellent tool that allows me to add various elements and effects I want to incorporate into my images. At first, I didn’t fully understand their utility, but as I learned how to use them effectively, they became a key component in achieving better results. Over time, experimenting with filters helped me enhance the overall quality of my work. For me, using a reference image, style, or character is incredibly useful. Features like selecting a specific character's image to maintain the same face across multiple generations are invaluable for ensuring visual consistency, which is especially important when working on larger creative projects.
  3. How did you balance creativity with the technical aspects of the software?

    While it might seem that all image-generation tools are the same and that a good prompt is all you need, that’s not entirely true. A single prompt can produce very different results depending on your tool. On one hand, it’s crucial to familiarize yourself with the software’s features and experiment with them. Through trial and error—refining prompts and testing repeatedly—I gained a better understanding of how to leverage the functionalities.

Video Creation and Editing

  1. How did you approach the process of transforming static images into engaging videos?

    At the beginning of 2024, I started generating images using MidJourney and DALL-E for educational purposes. I then used an AI avatar image in D-ID Studio to create videos, leveraging their lip synchronization technology by providing recorded speeches.

    In August, I had the idea of turning my novel Bébé à l’improviste into an audiobook. I wanted to create images to represent the main and supporting characters, giving them faces to better immerse readers. However, this wasn’t enough for me; I wanted to bring the characters to life through animation. I started making them move and talk, adding dedicated music, which allowed me to explore new creative dimensions. This marked the beginning of my journey into creating videos, music, and trailers. Although the project temporarily diverged from its initial goal, I have learned a lot over the past three months and I continue to improve daily.
  2. Could you explain the methods or steps you followed for video creation and editing?

Video Sequence Creation:

  • I start by selecting the images I plan to use for the video.
  • Using video generation applications, I create sequences, typically between 5 to 10 seconds long.
  • I import an image and write a prompt to describe the desired movement or action.
  • I adjust camera movements when necessary or leave the tool to generate results freely for testing.
  • Some tools, like Kling, allow me to extend an existing sequence. For instance, if I’m satisfied with an initial 5- or 10-second sequence, I can add another 5 seconds by using a new prompt.

Sequence Assembly:

  • Once I’ve compiled a satisfactory set of sequences, I feel ready to start editing the video.

Sound and Effects Addition:

  • Before editing, I determine whether music, narration, or both are needed.
  • Sometimes, I also need sound effects, which I create for the project.

Video Editing:

  • For now, I use tools like FlexClip and CapCut. However, with recent projects, I plan to transition to more advanced software such as Adobe Premiere Pro, despite its higher cost and the learning curve involved.
  • Often, during editing, I realize that a necessary element is missing. In such cases, I return to the creation process to generate the required component (images, video sequences, music).

Exporting and Sharing:

  • Once I’m satisfied with the final video, I export it in the optimal format and resolution to share on my social media platforms.
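
    For readers who prefer to script this step rather than export from a GUI editor, here is a minimal sketch using Python's standard subprocess module to drive ffmpeg. It assumes ffmpeg is installed; the file names and the vertical 1080x1920 target are illustrative placeholders, not the exact settings described above.

    import subprocess

    # Hypothetical file names -- replace with your own edit and target path.
    SOURCE = "final_edit.mp4"
    TARGET = "final_edit_1080x1920.mp4"

    # Scale the video to fit inside 1080x1920, pad to the exact frame size,
    # and re-encode with widely supported codecs (H.264 video, AAC audio).
    subprocess.run(
        [
            "ffmpeg",
            "-i", SOURCE,
            "-vf",
            "scale=1080:1920:force_original_aspect_ratio=decrease,"
            "pad=1080:1920:(ow-iw)/2:(oh-ih)/2",
            "-c:v", "libx264",
            "-crf", "18",
            "-preset", "slow",
            "-pix_fmt", "yuv420p",
            "-c:a", "aac",
            "-b:a", "192k",
            TARGET,
        ],
        check=True,
    )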

  3. Have you encountered challenges, and how did you overcome them?

    I faced two main challenges when creating animated videos from static images:

  1. Maintaining Visual Consistency in Outfits:

    When the same character needs to appear in multiple scenes wearing the same outfit, achieving uniform results can be challenging. Image generation tools often struggle to reproduce clothing or accessories consistently across different settings. To overcome this, I must be very precise with the outfit description. I then generate several variations of the character from the start and select the image where the outfit is almost identical. This allows me to adapt the scenes while maintaining visual uniformity.
  2. Creating Varied Facial Expressions:

    Although the character feature keeps the face consistent, generating different facial expressions can be tricky. For example, if the image associated with the character shows a neutral expression, it can be difficult to make the character smile in a new generation. This process consumes many credits and requires repeated generations to achieve the desired smile.

    To address this issue, I use a specific method:
  • I create a panel of 10 to 12 images representing the character, with a detailed prompt about the face, hair, eyes, etc. I specify that each image in the panel should reflect a different emotion or expression (such as frowning, thoughtful, laughing, joyful, sad, etc.).
  • Once I get the desired generation, I download the file and split each image into separate JPEG or PNG files for flexible use (a small script for this step is sketched below).
  • Depending on the project’s needs, I select the image that best matches the desired emotion.

    This approach allows me to maintain consistency while gaining more control over facial expressions. Of course, consistency in images is key for better videos.
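
    As a concrete illustration of the splitting step, here is a minimal sketch in Python using the Pillow library. The collage file name, the 4x3 grid layout, and the output folder are assumptions to adapt to your own panel; the point is simply to crop each tile out of the grid and save it as its own file.

    from pathlib import Path

    from PIL import Image

    # Hypothetical inputs -- adjust to your own collage and grid layout.
    COLLAGE_PATH = Path("character_panel.png")
    COLUMNS, ROWS = 4, 3  # a twelve-panel collage laid out 4 wide by 3 tall
    OUTPUT_DIR = Path("expressions")

    OUTPUT_DIR.mkdir(exist_ok=True)

    collage = Image.open(COLLAGE_PATH)
    tile_width = collage.width // COLUMNS
    tile_height = collage.height // ROWS

    for row in range(ROWS):
        for col in range(COLUMNS):
            # Crop one panel out of the grid.
            box = (
                col * tile_width,
                row * tile_height,
                (col + 1) * tile_width,
                (row + 1) * tile_height,
            )
            panel = collage.crop(box)
            # Save each expression as its own PNG for flexible reuse.
            panel.save(OUTPUT_DIR / f"expression_{row * COLUMNS + col + 1:02d}.png")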

Work with Music

  1. Could you describe your approach to incorporating music into your content?

    I carefully select music that matches the mood and message of my creations. Music serves as a powerful tool to amplify the emotions I want to convey. Sometimes, this emotion reflects my personal feelings, as seen in three songs I published on TikTok. However, more often, I compose music to capture the emotions of the protagonist in my novel, to present the story, or even to narrate a specific chapter through a song.

    To achieve this, I write the lyrics myself. As a French writer, I sometimes draft the poem in French and then translate it using a translation tool. Since I have a good understanding of English, I review the translated version, tweak the wording, and adapt the style to suit the song. Occasionally, I collaborate with ChatGPT for refinements; sometimes I use its suggestions, and other times I rely on my intuition.

    When I move to music generation tools, such as Suno—my current favorite—I often find myself revisiting and adjusting the lyrics. The first version is rarely the best, so I refine and align the tone, style, and mood of the song with the character’s perspective. This iterative process allows me to immerse myself in the character’s mindset to create an authentic and emotional connection.
  2. How did you choose the right tracks, and what tools or techniques did you use to sync music with visuals?

    I can generate up to 100 tracks to find the perfect one. This process involves iteratively improving the lyrics, musical style, and instruments to align with my vision. Often, the 5th, 20th, or even 50th version might finally be the one I select. I admit to being meticulous—perhaps even obsessive—about this process.

    To ensure I make the right choice, I distance myself from the music for a few days. During this break, I work on other projects or songs. When I return, I listen to all the tracks I’ve created. Of the 100 tracks, I usually abandon about 70% after just 10 to 30 seconds of listening, as they fail to resonate with me. Around 20% hold my interest until halfway, and the remaining 10% are the ones I listen to entirely.

    Ultimately, I choose the track that I can listen to repeatedly, the one that deeply resonates with me and perfectly reflects the intended emotion.

  3. What advice would you give others about enhancing videos with audio?

    Depending on the project, music can alternate between two roles:
  • Reinforcing the narrative or context of the story: This immerses the viewer in the world created by the video.
  • Serving as a narrative element itself: This approach is often seen in music videos where the song tells a story, complemented by the visuals.

    For example:
  • In one of my music videos based on my novel Unexpected Baby, the song narrates a chapter of the story, creating a cohesive experience between audio and visuals. I even created a video for this song (an amateur effort, and my first video creation).
  • In another project—a submission for a competition showcasing AI filmmaking tools—I used instrumental music to enhance the atmosphere and support the narrative.

Personal Background and Inspiration

  1. Could you tell us more about your professional background and how it connects with your creative work?

    I graduated in software engineering and began my career as a programmer, later becoming the leader of a development team. Afterward, I took on the role of IT Manager at a business school. When time allows, I teach Python programming and the metaverse, and I train people on various AI tools. I also write technical content on topics such as the Metaverse, Web 3.0, and new technologies in general. This is my LinkedIn account.
  2. What are your hobbies and passions, and how do they influence your artistic style?

    I love reading and writing. I'm a writer on multiple platforms, having completed one long novel and almost finished a second, including rewrites. I also write song lyrics and, as you've noticed, I use Suno to compose my own music. My writer's side is complemented by my musical and video creations. I even plan to transform one of my short stories into a short film using AI.

    Additionally, I'm excited to revisit my work Unexpected Baby (Bébé à l’improviste), a novel I one day hope to see on the big screen—it’s such a beautiful story. This is the link to my profile on Inkitt and the story.

    If you read the summary and the prologue, you will get an idea about my style. You don’t need to have an account on Inkitt, and the browser translator will allow you to translate it into English—it takes less than 5 minutes to read.
  3. How has CGDream helped you express your creativity or reach your goals?

    CGDream has made it incredibly easy for me to create the images I need for my projects. It is easy to use, and I utilize it extensively across almost all my work. It has been an invaluable tool for bringing my creative ideas to life. CGDream has become an essential part of my process, helping me visualize my thoughts and bring them closer to reality.

Practical Advice for the Community

  1. What tips or best practices would you share with other CGDream users?

    Take the time to explore all the features, even those that may seem complex at first. Test prompts ranging from short to detailed or long, and also take a look at what the creator community is sharing on CGDream. You can learn from their prompts and use of filters, and sometimes you’ll discover a starting point that will save you time, effort, and money. By adjusting a prompt to suit your needs, you can streamline your process and achieve better results.
  2. Are there any hidden features or tricks you’ve discovered that others might find useful?

    One trick I’ve already mentioned involves creating a panel with multiple captures of the same character. Here, I’m sharing a prompt that can be adapted to suit the traits of the character you want to represent. For me, the following traits perfectly capture the protagonist of my short novel I Should Have!:

    “Create a twelve-panel photo collage featuring a stunning blonde woman in her thirties, exuding newfound confidence. She has blue eyes and long hair. Each panel highlights a specific emotion and pose: laughing, smiling, crying, sadness, fear, happiness, anger, surprise, depression, joy, confusion, determination, frustration, and excitement. The background should remain a simple white, with even studio lighting focusing on the hair and face. Camera settings: high-resolution portrait lens.”

    This approach allows for a detailed and consistent depiction of your character’s emotions, which can be particularly useful for storytelling or visual development.
  3. Which features would you like to see in the future?

    The need for consistent characters is crucial, especially with all the new possibilities and perspectives emerging. AI influencers, AI-generated movies, AI music videos—all of these require consistency at some point.

    But not just for a single character. It would be a dream come true if, when I want to create an image depicting a scene with two or more characters, I could use two or three distinct characters, selecting them by their names.

    I understand that implementing this functionality won't be easy, as it would require allowing users to train a model on the images created from the panel, then refer to a specific character whenever they want to include it in an image. But the ultimate feature would be the ability to reference multiple characters. Yes, I would be thrilled to see such a feature come to life on CGDream.



    Thank you!

    Dalila TALEB
