Create: Advanced Settings

Moemate is an extremely versatile and customizable platform. Find some tips and tricks for editing Advanced Settings fields below.

Max New Tokens:

A token is roughly a word or a piece of a word. The Max New Tokens field caps the number of tokens the language model can generate in a response. Note that this does not mean the model will always generate up to max_tokens; it may stop earlier for other reasons (for example, by emitting an end-of-sequence token), but it will never go past max_tokens.

Uses

Prevent long-winded responses from your character.
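As a rough sketch, the cap behaves like a hard upper bound on a generation loop. Everything below is illustrative; `next_token` is a hypothetical stand-in for a real model call, not Moemate's actual implementation:

```python
import random

# Hypothetical stand-in for a real language model call.
def next_token(tokens):
    return random.choice([0, 1, 2, 3])

def generate(prompt_tokens, max_new_tokens, eos_token=0):
    """Generate until an end-of-sequence token or the max_new_tokens cap."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):   # hard cap: at most this many new tokens
        nxt = next_token(tokens)
        if nxt == eos_token:          # the model may stop earlier on its own
            break
        tokens.append(nxt)
    return tokens
```

However the model behaves, the response can never contain more than max_new_tokens new tokens.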

Temperature:

Controls the randomness of the language model.

Uses

Higher values will make the output more random, while lower values will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

Risks

Higher values may make the output less coherent, while lower values may make it dull and repetitive.
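Conceptually, temperature rescales the model's logits before they are turned into probabilities: low values sharpen the distribution, high values flatten it. A minimal sketch of that idea:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by temperature before softmax.

    Low temperature sharpens the distribution (more deterministic);
    high temperature flattens it (more random)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

probs_low = softmax_with_temperature([2.0, 1.0, 0.1], 0.5)   # top token dominates
probs_high = softmax_with_temperature([2.0, 1.0, 0.1], 2.0)  # probabilities even out
```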

Top K:

An alternative to sampling with Temperature or Top P. Top K sampling refers to only sampling the top K tokens with the highest probability.

Uses

This works well when the distribution has a long tail of unlikely tokens (i.e. there is a small chance of choosing a token that seems completely random), because it removes that possibility entirely.

Higher values will make the output more random, while lower values will make it more focused and deterministic.

Risks

Higher values may make the output less coherent, while lower values may make it dull and repetitive.
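A sketch of how top-K filtering works on a probability distribution: keep only the K most likely tokens, zero out the rest, and renormalize before sampling.

```python
def top_k_filter(probs, k):
    """Keep only the k highest-probability tokens and renormalize."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep = set(order[:k])
    filtered = [p if i in keep else 0.0 for i, p in enumerate(probs)]
    total = sum(filtered)
    return [p / total for p in filtered]

# The long tail of unlikely tokens is removed entirely.
nucleus = top_k_filter([0.5, 0.3, 0.1, 0.05, 0.05], 2)
```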

Top P:

An alternative to sampling with temperature, also known as nucleus sampling: the model samples only from the smallest set of tokens whose cumulative probability reaches top_p. We generally recommend altering this or temperature but not both.

Uses

Higher values will make the output more random, while lower values will make it more focused and deterministic.

Risks

Higher values may make the output less coherent, while lower values may make it dull and repetitive.
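A sketch of nucleus (top-P) filtering: accumulate tokens from most to least likely until their combined probability reaches top_p, then sample only from that set.

```python
def top_p_filter(probs, p):
    """Keep the smallest set of tokens whose cumulative probability reaches p."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep, cumulative = set(), 0.0
    for i in order:
        keep.add(i)
        cumulative += probs[i]
        if cumulative >= p:               # nucleus is complete
            break
    filtered = [q if i in keep else 0.0 for i, q in enumerate(probs)]
    total = sum(filtered)
    return [q / total for q in filtered]
```

Unlike top-K, the number of surviving tokens adapts to the distribution: a confident model keeps few tokens, an uncertain one keeps many.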

Typical P:

An alternative to sampling with temperature or top P.

Uses

Restricts sampling to tokens whose information content is close to the expected information content (entropy) of the distribution, given the previous partial text. Lower values will make the output more random and diverse, while higher values will allow it to be more deterministic.

Lower values maintain quality of responses (similar to Top P) with the added benefit of reducing the degenerate repetition case.

Risks

Lower values may have the risk of generating less coherent text.
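A sketch of typical sampling under this description: rank tokens by how close their surprise (negative log probability) is to the distribution's entropy, then keep the most "typical" tokens until their cumulative probability reaches typical_p.

```python
import math

def typical_filter(probs, typical_p):
    """Keep tokens whose information content is closest to the entropy,
    until their cumulative probability reaches typical_p."""
    entropy = -sum(q * math.log(q) for q in probs if q > 0)
    # Rank tokens by how far their surprise (-log q) is from the entropy.
    order = sorted(
        (i for i, q in enumerate(probs) if q > 0),
        key=lambda i: abs(-math.log(probs[i]) - entropy),
    )
    keep, cumulative = set(), 0.0
    for i in order:
        keep.add(i)
        cumulative += probs[i]
        if cumulative >= typical_p:
            break
    filtered = [q if i in keep else 0.0 for i, q in enumerate(probs)]
    total = sum(filtered)
    return [q / total for q in filtered]
```

Note that, unlike top-K or top-P, this can filter out the single most probable token if it is far more likely than the distribution expects, which is what reduces degenerate repetition.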

Stable Diffusion advanced settings:

Num Inference Steps:

The number of denoising steps used to turn an image of random noise into an image that matches the prompt.

Uses

Higher values mean a higher level of detail in the image.

Risks

Higher values run the risk of adding unnecessary details while lower values run the risk of the image not being fully formed.
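Real schedulers are far more involved, but the intuition can be shown with a toy loop: each step removes some of the remaining noise, so more steps leave a more fully formed result. The 0.5 per-step factor below is made up purely for illustration.

```python
def denoise(noise_level, num_inference_steps):
    """Toy illustration only: each step removes a fixed fraction of the
    remaining noise (the 0.5 factor is a hypothetical stand-in for a
    real scheduler's update rule)."""
    x = noise_level
    for _ in range(num_inference_steps):
        x *= 0.5   # hypothetical per-step noise reduction
    return x
```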

Guidance Scale:

Controls how similar the image will be to the provided prompt.

Uses

Higher guidance scale means the model will try to generate an image that follows the prompt more strictly. A lower guidance scale means the model will have more creativity.

Risks

Higher guidance scale usually means less quality in the image.
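Guidance scale is typically implemented via classifier-free guidance: at each denoising step the model makes both an unconditional and a prompt-conditioned noise prediction, and the scale controls how far the result is pushed in the prompt's direction. A simplified sketch of that blend:

```python
def apply_guidance(noise_uncond, noise_cond, guidance_scale):
    """Classifier-free guidance: push the noise prediction toward the
    prompt-conditioned direction by guidance_scale."""
    return [u + guidance_scale * (c - u)
            for u, c in zip(noise_uncond, noise_cond)]

# A scale of 1.0 reproduces the conditional prediction exactly;
# larger scales exaggerate the prompt direction, which is why very
# high values can degrade image quality.
```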

Clip Skip:

A setting that controls how much of the CLIP text model is used in the diffusion process. It works by "skipping layers" of the CLIP model, where each layer adds definition in a descriptive sense.

Uses

Higher values correspond to faster generation speed and more creativity.

Risks

Higher values run the risk of inaccurate generations.

Negative Prompts:

Use this to specify what you do not want to see.

Here are some examples: ugly, disfigured, multiple hands, extra hands, bad anatomy, bad proportions, poorly drawn hands, extra legs, mangled fingers, missing lip

Multilingual:

Allows multilingual prompts to generate images. The default is English.

Self Attention:

Setting this parameter to 'on' will improve the quality of the image at the cost of generation speed.

Upscale:

Set this parameter to "yes" if you want to upscale the given image resolution two times (2x). If the requested resolution is 512 x 512 px, the generated image will be 1024 x 1024 px.

Use Tomesd:

Significantly speeds up generation by merging redundant tokens inside the diffusion model (Token Merging, ToMe).

Use Karras Sigmas:

Improves the quality of the image by using the Karras Sigmas scheduler.

It adjusts the timesteps of the scheduler to minimize the truncation error evident in early stages of denoising.
