Guides to Neural Networks for Portraits and Digital Art


Guides to Neural Networks

  1. Neural Networks: The best software to enhance or create images and videos
  2. How to take portraits using the Stable Diffusion Neural Network
  3. The VGTimes editors draw art using a neural network – look what we’ve got

Why we chose the Stable Diffusion Neural Network

The software blends user friendliness with solid performance and a wide range of experimental options. It is free to use, but the number of generations is limited; when you reach the limit, a short wait restores access.

A capable graphics card with 8 GB of video memory is helpful, but not strictly required. The guides below include a workflow that works on a standard computer, a smartphone, or a tablet, helping you get portrait results without needing top-tier hardware.

Preparations for taking a portrait in Stable Diffusion

To create a portrait you will need:

  1. A Google account and about 5 GB of free space on Google Drive.
  2. Photos of the face from which the portrait will be drawn. More photos improve results, but ten images are typically sufficient. Capture from different angles with good lighting. For best consistency, a single photoshoot is ideal so hairstyles and makeup stay the same. Each photo should be adjusted to a resolution of 512 by 512 pixels using tools like Photoshop, Paint.NET, or an online converter.
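The cropping and resizing step can be scripted instead of done by hand. Below is a minimal sketch using the Pillow library; the function name and file paths are illustrative, not part of the original workflow.

```python
from PIL import Image


def prepare_photo(src_path: str, dst_path: str, size: int = 512) -> None:
    """Center-crop a photo to a square, then resize it to size x size."""
    img = Image.open(src_path)
    w, h = img.size
    side = min(w, h)
    left = (w - side) // 2
    top = (h - side) // 2
    img = img.crop((left, top, left + side, top + side))
    img = img.resize((size, size), Image.LANCZOS)
    img.save(dst_path)
```

Run it over each training photo (for example, `prepare_photo("me1.jpg", "me1_512.png")`) to get the 512 by 512 inputs the model expects.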

How to install Stable Diffusion

Set up an account on the Hugging Face platform.

Click the account icon in the top-right corner and open Settings. Under Access Tokens, choose New Token, name it something like Dream Booth, and assign a role as needed. Keep this tab open for easy reference while working with the model.
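Before pasting the token later on, it can help to sanity-check that you copied the whole string. The sketch below is a rough heuristic, assuming the current Hugging Face token format (user access tokens begin with `hf_`); it is not official validation.

```python
def looks_like_hf_token(token: str) -> bool:
    """Rough sanity check for a copied Hugging Face access token.

    Assumption: current HF user tokens start with 'hf_'. This only
    catches obvious copy-paste mistakes (truncation, stray spaces).
    """
    token = token.strip()
    return token.startswith("hf_") and len(token) > 10 and " " not in token
```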

Download the latest Stable Diffusion release from GitHub by going to Code and selecting Download ZIP.

How to take a portrait with Stable Diffusion. Upload photos for processing

Open the Stable Diffusion notebook from the provided link. Click File and then Save a copy in Drive.

A new tab will open. Choose Connect in the top-right corner. Wait until the RAM and Disk indicators replace the Connect label.

On the left, observe the system type assigned by Google. Use the Check type of GPU and VRAM available button to confirm. Ideal options include Tesla T4 or Tesla P100. If a different GPU is assigned, disconnect and delete the runtime, then reconnect until a suitable one is allocated.

Under Installation Requirements, click the Play icon to install the requirements on the virtual machine. The setup completes in about a minute.

Return to the Hugging Face tab and open the token area discussed earlier. Click Copy.

The program offers a Register with HuggingFace option. In the Token field, paste the copied token via Ctrl+V. If successful, a green check appears next to the token button.

Scroll to Install xformers and begin the process with Play.

In Settings and Run, enable Save to Google Drive. In MODEL_NAME, assign a unique label for your portraits. In OUTPUT_DIR, designate the folder for saved results, then click Play to run the cell and link Google Drive.
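Conceptually, this step just fills in a handful of values. The sketch below mirrors those fields as a plain Python dict; the helper name and the Drive path are illustrative assumptions (Colab mounts Drive at `/content/drive/MyDrive`, but your folder layout may differ).

```python
import os


def dreambooth_settings(model_name: str,
                        drive_root: str = "/content/drive/MyDrive") -> dict:
    """Sketch of the Settings and Run values: a unique model name plus
    an output folder on Google Drive. Paths are illustrative."""
    output_dir = os.path.join(drive_root, "stable_diffusion", model_name)
    return {
        "save_to_gdrive": True,
        "MODEL_NAME": model_name,
        "OUTPUT_DIR": output_dir,
    }
```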

In Start Training, click Play again. You will see an option to Upload your images; select and place your photos accordingly.

Click Perform Conversion to start training. Avoid enabling fp16 to preserve quality.

Note: the program periodically asks you to confirm you are still active. If you miss the prompt, the session ends prematurely and your inputs must be re-entered.

Once finished, the portrait model uploads to Google Drive and becomes usable in other applications.

Create digital portraits from photos

Continue in the Stable Diffusion environment. Find the Run for Generation Images section located beneath the image upload area.

In the prompt: field, you can set the background color, the artist style to imitate, and any additional details. You can also use ready-made prompts from reputable sources or video tutorials to inspire exciting results. The description should begin with your unique name and specify the portrait type (for example, male, female, dog, or cat).

Also:

  • Number of samples controls how many distinct portraits you’ll get; four is generally a good starting point.
  • Guidance scale determines how strongly the text steers the final image, and the number of inference steps controls how long the model refines it. For initial experiments, defaults of about 7.5 and 50 work well.
  • Width and height typically stay at 512 by 512 for consistent results.
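The settings above can be gathered and sanity-checked before generation. This is a minimal pure-Python sketch, not the notebook's actual code; the function name and the multiple-of-8 dimension rule (a common requirement for Stable Diffusion 1.x image sizes) are assumptions.

```python
def generation_settings(prompt: str,
                        num_samples: int = 4,
                        guidance_scale: float = 7.5,
                        num_inference_steps: int = 50,
                        width: int = 512,
                        height: int = 512) -> dict:
    """Collect and sanity-check the generation fields described above."""
    if not prompt.strip():
        raise ValueError("prompt must not be empty")
    if width % 8 or height % 8:
        raise ValueError("width and height should be multiples of 8")
    return {
        "prompt": prompt,
        "num_samples": num_samples,
        "guidance_scale": guidance_scale,
        "num_inference_steps": num_inference_steps,
        "width": width,
        "height": height,
    }
```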

Press Play in the upper left to generate a portrait.

How to write an interesting prompt for a future portrait

  1. Visit Lexica.art to explore prompts and styles.
  2. Choose a design whose vibe you like. Open it to view the full descriptive text used to create the image.
  3. On the left, select Quick Copy to save the prompt to your clipboard.
  4. Return to Stable Diffusion and paste the prompt into the prompt: field. Begin with your unique name and specify the portrait type (male, female, dog, or cat).

You can trim terms that don’t align with your vision. The result can differ significantly from the original image you saw on the reference site.
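The prompt recipe above (unique name first, then subject type, then style terms from a site like Lexica.art) can be sketched as a small helper. The function name and sample terms are illustrative, not part of the original guide.

```python
def build_prompt(unique_name: str, subject: str, details: list) -> str:
    """Assemble a prompt: the unique model name and subject type first,
    followed by style terms copied (and trimmed) from a reference prompt."""
    parts = ["{} {}".format(unique_name, subject)]
    parts += [d.strip() for d in details if d.strip()]
    return ", ".join(parts)
```

For example, `build_prompt("johndoe123", "male portrait", ["oil painting", "dramatic lighting"])` yields a single comma-separated prompt string starting with your unique name.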

Citation: VG Times
