Impact
The latest trend on the internet is easy access to image generators like Midjourney, DALL-E 2, or Stable Diffusion. A few of these sit behind paywalls. I'll show you how, with a few lines of code, you can get started running your own in an AWS SageMaker notebook instance!
Thanks to the public release of Stable Diffusion, you can download and run the AI model that generates the images yourself. Sometimes the results are horrifying, other times simply elegant. The examples provided below came from the prompts "realistic pikachu" and "old man with a hat". Results will be different each time the prompt is run!
It is hard to believe this person isn't real. But do these image generators have practical applications beyond artwork? Of course: they're bound to have an impact on the design industry, where a few simple prompts can produce an inspiring new design, such as a piece of furniture!
Prerequisites
- Sign up for an account on Hugging Face.
- Accept the Terms of Service.
- Generate an access token.
Launching the notebook
WARNING: SageMaker notebook instances cost money to run! Make sure you clean up the instance as soon as you are done in order to avoid charges! Proceed at your own risk!
- Log in to the AWS Console.
- Search for Amazon SageMaker and select it.
- Click Notebook instances in the left-hand menu.
- Click Create notebook instance.
- Choose a name for the instance. The instance type MUST be an accelerated computing type: select ml.p2.xlarge or better.
- Leave Permissions and encryption at the defaults.
- Open up the network settings and place the instance in the default VPC; it will need internet access.
- Click Create notebook instance!
Configuring the notebook
If you've followed along so far, it will take a bit for the instance to be provisioned. In the meantime, note that we still need to make some changes to the instance: we'll activate an existing conda environment and add some packages.
- Once the instance is ready, select Open JupyterLab.
- Select Terminal.
- Activate the conda environment:

```
source /home/ec2-user/anaconda3/etc/profile.d/conda.sh
conda activate pytorch_38
```
- Install the packages required for this to work:

```
pip install diffusers==0.2.3 transformers scipy
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
```
- Log in with the Hugging Face CLI:

```
huggingface-cli login
```

This will prompt you for the token from earlier.
Output if successful:

```
Login successful
Your token has been saved to /home/ec2-user/.huggingface/token
Authenticated through git-credential store but this isn't the helper defined on your machine.
You might have to re-authenticate when pushing to the Hugging Face Hub. Run the following command in your terminal in case you want to set this credential helper as the default
```
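As the output above shows, the login command saves the token to `/home/ec2-user/.huggingface/token`. If you later want to reuse that token from a script instead of re-running the CLI, a small standard-library helper can read it back. This is just an illustrative sketch; `read_hf_token` is a hypothetical helper, not part of the huggingface-cli tooling:

```python
from pathlib import Path


def read_hf_token(path: str = "~/.huggingface/token") -> str:
    """Read a previously saved Hugging Face token from disk (illustrative helper)."""
    token_file = Path(path).expanduser()
    if not token_file.exists():
        raise FileNotFoundError(
            f"No token found at {token_file}; run `huggingface-cli login` first"
        )
    # The CLI writes the raw token; strip any trailing newline.
    return token_file.read_text().strip()
```

You could then pass the result to the pipeline later, e.g. `use_auth_token=read_hf_token()`, instead of `use_auth_token=True`.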
The Code!
We're done with the configuration! Select File -> New -> New Notebook.
Copy and paste the following code into the cell:
```python
from torch import autocast
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler

lms = LMSDiscreteScheduler(
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear"
)

# this will substitute the default PNDM scheduler for K-LMS
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    scheduler=lms,
    use_auth_token=True
).to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    image = pipe(prompt)["sample"][0]

image.save("astronaut_rides_horse.png")
```
- Click Run! NOTE: the first run will take a while as it downloads the AI model. The final output should be a beautiful photo, and it should look different from mine!
Conclusion
You can use this to generate many different types of images by editing the prompt and file name; the results will almost never be the same. This is just a basic framework, and a much more complex system could be built to generate images based on web requests: easily launched on an EC2 instance with a GPU attached, generating images and pushing them to S3.
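Since the prompt and file name are edited together, one way to avoid doing that by hand is to derive the file name from the prompt itself. Here is a minimal standard-library sketch; `prompt_to_filename` is a hypothetical helper of my own, not part of diffusers:

```python
import re


def prompt_to_filename(prompt: str, ext: str = "png") -> str:
    """Turn a free-form prompt into a safe, readable file name."""
    # Collapse anything that isn't a lowercase letter or digit into underscores.
    slug = re.sub(r"[^a-z0-9]+", "_", prompt.lower()).strip("_")
    return f"{slug}.{ext}"


# With the pipeline from the notebook above, you could then loop over prompts
# (this part requires the GPU setup from earlier, so it's shown as a comment):
# for prompt in ["realistic pikachu", "old man with a hat"]:
#     with autocast("cuda"):
#         image = pipe(prompt)["sample"][0]
#     image.save(prompt_to_filename(prompt))
```

For example, `prompt_to_filename("old man with a hat")` yields `old_man_with_a_hat.png`.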
WARNING: DO NOT FORGET TO DELETE THE NOTEBOOK INSTANCE TO STOP THE CHARGES!