Dataset columns:

| Column | Type |
|---|---|
| image | image |
| prompt | string |
| word_scores | string |
| alignment_score_norm | float32 |
| coherence_score_norm | float32 |
| style_score_norm | float32 |
| alignment_heatmap | sequence |
| coherence_heatmap | sequence |
| alignment_score | float32 |
| coherence_score | float32 |
| style_score | float32 |
Building upon Google's research "Rich Human Feedback for Text-to-Image Generation", we have collected over 1.5 million responses from 152,684 individual humans using Rapidata via the Python API. Collection took roughly 5 days.
Overview
We asked humans to evaluate AI-generated images on style, coherence, and prompt alignment. For images that contained flaws, participants were asked to identify specific problematic areas. Additionally, for all images, participants identified words from the prompts that were not accurately represented in the generated images.
If you want to replicate the annotation setup, the steps are outlined at the bottom.
This dataset and the annotation process are described in further detail in our blog post Beyond Image Preferences.
Word Scores
Users identified words from the prompts that were NOT accurately depicted in the generated images. Higher word scores indicate poorer representation in the image. Participants also had the option to select "[No_mistakes]" for prompts where all elements were accurately depicted.
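The word_scores column stores these results as a JSON-encoded string of [word, score] pairs. A minimal sketch for extracting the worst-depicted words from a row; the 2.0 cutoff is an arbitrary illustration, not part of the dataset:

import json
from datasets import load_dataset

ds = load_dataset("Rapidata/text-2-image-Rich-Human-Feedback", split="train", streaming=True)

row = next(iter(ds))
word_scores = json.loads(row["word_scores"])  # list of [word, score] pairs

# Sort from worst to best depicted; higher score = worse representation.
worst_first = sorted(word_scores, key=lambda pair: pair[1], reverse=True)
print([word for word, score in worst_first if score > 2.0])  # illustrative cutoff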
Example Results:
Coherence
The coherence score measures whether the generated image is logically consistent and free from artifacts or visual glitches. Without seeing the original prompt, users were asked: "Look closely, does this image have weird errors, like senseless or malformed objects, incomprehensible details, or visual glitches?" Each image received at least 21 responses indicating the level of coherence on a scale of 1-5, which were then averaged to produce the final score, where 5 indicates the highest coherence.
Images scoring below 3.8 in coherence were further evaluated, with participants marking specific errors in the image.
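Reusing the streaming ds from the snippet above, these flagged images can be pulled out by filtering on the raw coherence score; the 3.8 cutoff comes from the paragraph above:

from itertools import islice

# Keep only images that fell below the 3.8 coherence cutoff described above.
low_coherence = (row for row in ds if row["coherence_score"] < 3.8)
for row in islice(low_coherence, 5):
    print(row["prompt"], row["coherence_score"])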
Example Results:
Alignment
The alignment score quantifies how well an image matches its prompt. Users were asked: "How well does the image match the description?". Again, each image received at least 21 responses indicating the level of alignment on a scale of 1-5 (5 being the highest), which were then averaged.
For images with an alignment score below 3.2, additional users were asked to highlight areas where the image did not align with the prompt. These responses were then compiled into a heatmap.
As noted in the Google paper, alignment is harder to annotate consistently: if, for example, an object is missing entirely, it is unclear to annotators what part of the image they should highlight.
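The compiled heatmaps are stored per row as a 2D sequence of values. A minimal sketch for overlaying one on its image with matplotlib, reusing the streaming ds from above; it assumes the values may arrive string-encoded (as the viewer preview suggests) and that the grid spans the full image:

import numpy as np
import matplotlib.pyplot as plt

# Find a row that actually has an alignment heatmap (many rows store null).
row = next(r for r in ds if r["alignment_heatmap"] is not None)
heat = np.asarray(row["alignment_heatmap"], dtype=float)  # cast in case values are strings

img = row["image"]
plt.imshow(img)
# Stretch the heatmap grid over the image; assumes it covers the whole frame.
plt.imshow(heat, alpha=0.5, cmap="jet", extent=(0, img.width, img.height, 0))
plt.axis("off")
plt.show()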
Example Results:
Style
The style score reflects how visually appealing participants found each image, independent of the prompt. Users were asked: "How much do you like the way this image looks?" Each image received at least 21 responses on a scale of 1-5, which were then averaged. In contrast to other preference collection methods, such as the Hugging Face image arena, the preferences were collected from people in 156 different countries and from all walks of life, creating a more representative score.
About Rapidata
Rapidata's technology makes collecting human feedback at scale faster and more accessible than ever before. Visit rapidata.ai to learn more about how we're revolutionizing human feedback collection for AI development.
Other Datasets
We run a benchmark of the major image generation models; the results can be found on our website. We rank the models according to their coherence/plausibility, their alignment with the given prompt, and style preference. The underlying 2M+ annotations can be found here:
- Link to the Coherence dataset
- Link to the Text-2-Image Alignment dataset
- Link to the Preference dataset
We have also started to run a video generation benchmark. It is still a work in progress and currently covers only 2 models, which are likewise analysed for coherence/plausibility, alignment, and style preference.
Replicating the Annotation Setup
For researchers interested in producing their own rich preference dataset, the Rapidata API can be used directly through Python. The code snippets below show how to replicate the modalities used in this dataset. Additional information is available in the documentation.
Creating the Rapidata Client and Downloading the Dataset
First install the rapidata package, then create the RapidataClient(); this client will be used to create and launch the annotation orders.
pip install rapidata
from rapidata import RapidataClient, LabelingSelection, ValidationSelection
client = RapidataClient()
As example data we will just use images from the dataset. Make sure to set streaming=True, as downloading the whole dataset might take a significant amount of time.
from datasets import load_dataset
ds = load_dataset("Rapidata/text-2-image-Rich-Human-Feedback", split="train", streaming=True)
ds = ds.select_columns(["image","prompt"])
Since we use streaming, we can extract the prompts and download the images we need like this:
import os

tmp_folder = "demo_images"
# make folder if it doesn't exist
if not os.path.exists(tmp_folder):
    os.makedirs(tmp_folder)

prompts = []
image_paths = []
for i, row in enumerate(ds.take(10)):
    prompts.append(row["prompt"])
    # save image to disk
    save_path = os.path.join(tmp_folder, f"{i}.jpg")
    row["image"].save(save_path)
    image_paths.append(save_path)
Likert Scale Alignment Score
To launch a Likert scale annotation order, we make use of the classification annotation modality. Below we show the setup for the alignment criterion. The structure is the same for style and coherence; only the arguments have to be adjusted, i.e. different instructions, answer options, and validation sets.
# Alignment Example
instruction = "How well does the image match the description?"
answer_options = [
"1: Not at all",
"2: A little",
"3: Moderately",
"4: Very well",
"5: Perfectly"
]
order = client.order.create_classification_order(
name="Alignment Example",
instruction=instruction,
answer_options=answer_options,
datapoints=image_paths,
contexts=prompts, # for alignment, prompts are required as context for the annotators.
responses_per_datapoint=10,
selections=[ValidationSelection("676199a5ef7af86285630ea6"), LabelingSelection(1)] # here we use a pre-defined validation set. See https://docs.rapidata.ai/improve_order_quality/ for details
)
order.run() # This starts the order. Follow the printed link to see progress.
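For comparison, a coherence order follows the same pattern, with the context omitted since coherence was judged without seeing the prompt. The answer options and validation set ID below are illustrative placeholders, not the exact values used for this dataset:

# Coherence Example (sketch)
coherence_instruction = ("Look closely, does this image have weird errors, like senseless "
                         "or malformed objects, incomprehensible details, or visual glitches?")
coherence_options = [  # placeholder wording, not the published options
    "1: Many errors",
    "2: Several errors",
    "3: Some errors",
    "4: Few errors",
    "5: No errors"
]
order = client.order.create_classification_order(
    name="Coherence Example",
    instruction=coherence_instruction,
    answer_options=coherence_options,
    datapoints=image_paths,  # no contexts: coherence is judged without the prompt
    responses_per_datapoint=10,
    selections=[ValidationSelection("<your-coherence-validation-set-id>"), LabelingSelection(1)]  # placeholder ID
)
order.run()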
Alignment Heatmap
To produce heatmaps, we use the locate annotation modality. Below is the setup used for creating the alignment heatmaps.
# alignment heatmap
# Note that the selected images may not actually have severely misaligned elements, but this is just for demonstration purposes.
order = client.order.create_locate_order(
name="Alignment Heatmap Example",
instruction="What part of the image does not match with the description? Tap to select.",
datapoints=image_paths,
contexts=prompts, # for alignment, prompts are required as context for the annotators.
responses_per_datapoint=10,
selections=[ValidationSelection("67689e58026456ec851f51f8"), LabelingSelection(1)] # here we use a pre-defined validation set for alignment. See https://docs.rapidata.ai/improve_order_quality/ for details
)
order.run() # This starts the order. Follow the printed link to see progress.
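The tap responses collected by a locate order can then be aggregated into heatmaps like the ones in this dataset. One possible aggregation (not necessarily the exact procedure used here) splats each tap as a Gaussian onto a grid:

import numpy as np

def taps_to_heatmap(taps, width, height, sigma=40.0):
    # Sum one Gaussian bump per (x, y) tap, then normalize to [0, 1].
    ys, xs = np.mgrid[0:height, 0:width]
    heat = np.zeros((height, width), dtype=float)
    for x, y in taps:
        heat += np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return heat / heat.max() if heat.max() > 0 else heat

# Hypothetical tap coordinates from a locate order like the one above.
heatmap = taps_to_heatmap([(120, 80), (130, 90), (300, 200)], width=512, height=512)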
Select Misaligned Words
To launch the annotation setup for the selection of misaligned words, we used the following setup.
# Select words example
from rapidata import LanguageFilter
select_words_prompts = [p + " [No_mistakes]" for p in prompts]
order = client.order.create_select_words_order(
name="Select Words Example",
instruction = "The image is based on the text below. Select mistakes, i.e., words that are not aligned with the image.",
datapoints=image_paths,
sentences=select_words_prompts,
responses_per_datapoint=10,
filters=[LanguageFilter(["en"])], # here we add a filter to ensure only english speaking annotators are selected
selections=[ValidationSelection("6761a86eef7af86285630ea8"), LabelingSelection(1)] # here we use a pre-defined validation set. See https://docs.rapidata.ai/improve_order_quality/ for details
)
order.run()