{"id":4047,"date":"2023-11-04T23:14:01","date_gmt":"2023-11-04T23:14:01","guid":{"rendered":"http:\/\/localhost:10003\/how-to-use-openai-dall-e-for-image-composition\/"},"modified":"2023-11-05T05:48:23","modified_gmt":"2023-11-05T05:48:23","slug":"how-to-use-openai-dall-e-for-image-composition","status":"publish","type":"post","link":"http:\/\/localhost:10003\/how-to-use-openai-dall-e-for-image-composition\/","title":{"rendered":"How to Use OpenAI DALL-E for Image Composition"},"content":{"rendered":"

\"DALL-E\"<\/p>\n

Image Source: OpenAI<\/em><\/p>\n

Introduction<\/h2>\n

OpenAI DALL-E is an advanced AI model that generates high-quality images from natural-language descriptions, powered by a combination of transformer-based architectures and advanced generative techniques.<\/p>\n

In this tutorial, we will learn how to use OpenAI DALL-E for image composition by providing textual prompts that produce unique and creative images. We will cover the following topics:<\/p>\n

    \n
  1. Setting Up DALL-E<\/li>\n
  2. Composing Images with DALL-E<\/li>\n
  3. Modifying Generation Parameters<\/li>\n
  4. Controlling Image Content<\/li>\n
  5. Advanced Techniques for Image Composition<\/li>\n<\/ol>\n

    Let’s get started!<\/p>\n

    1. Setting Up DALL-E<\/h2>\n

    Before we can start using DALL-E for image composition, we need to set it up on our system. OpenAI provides a Python library called dall_e<\/code> for interacting with the model. To install the library, run the following command:<\/p>\n

    pip install dall-e\n<\/code><\/pre>\n

    Once the installation is complete, we can import the necessary modules in our Python script:<\/p>\n

    import numpy as np\nimport torch\nfrom PIL import Image\n\nfrom dall_e import utils\nfrom dall_e import models\n<\/code><\/pre>\n

    2. Composing Images with DALL-E<\/h2>\n

    To compose an image using DALL-E, we need to provide a textual prompt that describes the desired image. DALL-E will generate an image based on this prompt. Let’s see how it’s done:<\/p>\n

    # Load the pre-trained DALL-E model\nmodel = models.load_model(\"dalle.pt\")\n\n# Encode the prompt text into a latent vector\ntext = \"a landscape with a red sunset\"\ntext_encoded = utils.encode(model, text)\n\n# Generate an image from the latent vector\nimage = utils.decode(model, text_encoded)\n\n# Save the generated image\nimage.save(\"generated_image.png\")\n<\/code><\/pre>\n

    In the above code snippet, we first load the pre-trained DALL-E model. Then we encode the prompt text using the encode()<\/code> function provided by the dall_e.utils<\/code> module. This function converts the input text into a latent vector representation that the DALL-E model can understand.<\/p>\n

    Next, we use the decode()<\/code> function to generate an image from the latent vector. This function takes the model and the latent vector as input and returns the corresponding image.<\/p>\n

    Finally, we save the generated image using the save()<\/code> method of the PIL<\/code> Image<\/code> object.<\/p>\n

    3. Modifying Generation Parameters<\/h2>\n

    DALL-E provides several generation parameters that we can modify to control the style and appearance of the generated images. Let’s explore a few of these parameters:<\/p>\n

    3.1 Resolution<\/h3>\n

    By default, DALL-E generates images with a resolution of 256×256 pixels. We can modify the resolution by passing the resolution<\/code> parameter to the decode()<\/code> function. Higher resolutions will result in more detailed images, but will also require more computational resources.<\/p>\n

    # Generate a 512x512 image\nimage = utils.decode(model, text_encoded, resolution=512)\n\n# Generate a 128x128 image\nimage = utils.decode(model, text_encoded, resolution=128)\n<\/code><\/pre>\n

    In the above code snippets, we generate images with resolutions of 512×512 pixels and 128×128 pixels, respectively.<\/p>\n

    3.2 Temperature<\/h3>\n

    The temperature<\/code> parameter controls the randomness of the generated images. Higher temperature values result in more random and diverse images, while lower values result in more deterministic and focused images. The default value is set to 0.8.<\/p>\n

    # Generate a random image\nimage_random = utils.decode(model, text_encoded, temperature=1.0)\n\n# Generate a focused image\nimage_focused = utils.decode(model, text_encoded, temperature=0.2)\n<\/code><\/pre>\n

    In the above code snippets, we generate a random image with a temperature of 1.0, and a focused image with a temperature of 0.2.<\/p>\n

    3.3 Top-k Sampling<\/h3>\n

    The top_k<\/code> parameter controls how many of the highest-probability candidates the model samples from at each generation step. Lower values produce more conservative and predictable images, while higher values allow more diverse, and sometimes noisier, results. The default value is 100.<\/p>\n

    # Generate a conservative image\nimage_conservative = utils.decode(model, text_encoded, top_k=10)\n\n# Generate a more diverse image\nimage_diverse = utils.decode(model, text_encoded, top_k=500)\n<\/code><\/pre>\n

    In the above code snippets, we generate a conservative image with a top_k<\/code> value of 10, and a more diverse image with a top_k<\/code> value of 500.<\/p>\n

    4. Controlling Image Content<\/h2>\n

    DALL-E allows us to control the content of the generated images by modifying the prompt text. By providing specific instructions, we can guide DALL-E to generate images with desired attributes. Let’s see a few examples:<\/p>\n

    4.1 Adding Attributes<\/h3>\n

    We can instruct DALL-E to add certain attributes to the generated image by including them in the prompt text. For example:<\/p>\n

    # Generate an image with a red sunset\ntext = \"a landscape with a red sunset\"\nimage_red_sunset = utils.decode(model, utils.encode(model, text))\n\n# Generate an image with a blue sunset\ntext = \"a landscape with a blue sunset\"\nimage_blue_sunset = utils.decode(model, utils.encode(model, text))\n<\/code><\/pre>\n

    In the above code snippets, we generate an image with a red sunset and an image with a blue sunset by modifying the color attribute in the prompt text.<\/p>\n

    4.2 Removing Attributes<\/h3>\n

    We can also remove certain attributes from the generated image by specifying that in the prompt text. For example:<\/p>\n

    # Generate an image without any buildings\ntext = \"a landscape without buildings\"\nimage_no_buildings = utils.decode(model, utils.encode(model, text))\n\n# Generate an image without any trees\ntext = \"a landscape without trees\"\nimage_no_trees = utils.decode(model, utils.encode(model, text))\n<\/code><\/pre>\n

    In the above code snippets, we generate an image without buildings and an image without trees by stating the exclusion in the prompt text. Note that text-to-image models often handle negation imperfectly, so prompts phrased this way are not always fully respected.<\/p>\n

    4.3 Combining Attributes<\/h3>\n

    We can combine multiple attributes to generate images with complex compositions. For example:<\/p>\n

    # Generate an image with a blue sky and green trees\ntext = \"a landscape with a blue sky and green trees\"\nimage_sky_trees = utils.decode(model, utils.encode(model, text))\n\n# Generate an image with a beach and palm trees\ntext = \"a tropical beach with palm trees\"\nimage_beach_palm = utils.decode(model, utils.encode(model, text))\n<\/code><\/pre>\n

    In the above code snippets, we generate an image with a blue sky and green trees, and an image with a beach and palm trees by combining multiple attributes in the prompt text.<\/p>\n

    5. Advanced Techniques for Image Composition<\/h2>\n

    DALL-E provides several advanced techniques that can be used to enhance image composition. Let’s explore a few of these techniques:<\/p>\n

    5.1 Interpolation<\/h3>\n

    Interpolation allows us to generate images that smoothly transition between two different prompts. We can use the interpolate()<\/code> function provided by the dall_e.utils<\/code> module to perform interpolation. This function takes two latent vectors and returns a sequence of intermediate latent vectors that can be used to generate the corresponding interpolated images.<\/p>\n

    # Encode the start and end prompt texts into latent vectors\nstart_text = \"a landscape with a blue sky\"\nend_text = \"a landscape with a red sunset\"\nstart_encoded = utils.encode(model, start_text)\nend_encoded = utils.encode(model, end_text)\n\n# Perform interpolation between the latent vectors\ninterpolated_latents = utils.interpolate(start_encoded, end_encoded)\n\n# Generate images from the interpolated latent vectors\ninterpolated_images = [utils.decode(model, latent) for latent in interpolated_latents]\n<\/code><\/pre>\n

    In the above code snippet, we first encode the start and end prompt texts into latent vectors. Then we use the interpolate()<\/code> function to generate a sequence of intermediate latent vectors. Finally, we generate the corresponding interpolated images using the decode()<\/code> function.<\/p>\n
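    For intuition, linear interpolation between two latent vectors can be sketched directly with numpy<\/code>. This is a minimal illustration, assuming the latents behave like ordinary numeric arrays (the model's actual latent representation may differ), and interpolate_latents<\/code> is a hypothetical helper, not part of the library:<\/p>\n

```python
import numpy as np

def interpolate_latents(start, end, steps=8):
    """Linearly blend two latent vectors, returning `steps` intermediate vectors."""
    weights = np.linspace(0.0, 1.0, steps)
    return [(1.0 - w) * start + w * end for w in weights]

# Demo with stand-in latent vectors.
start = np.zeros(4)
end = np.ones(4)
frames = interpolate_latents(start, end, steps=5)
print(len(frames))         # 5
print(frames[2].tolist())  # midpoint: [0.5, 0.5, 0.5, 0.5]
```

    Each intermediate vector could then be passed to decode()<\/code> to render one frame of the transition.<\/p>\n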

    5.2 Fine-Tuning<\/h3>\n

    DALL-E can be fine-tuned on custom datasets to specialize the generated images for specific use cases. Fine-tuning continues training from the pre-trained weights on a new dataset, adapting the model's outputs to a narrower visual domain.<\/p>\n

    Although training DALL-E from scratch is computationally expensive, the open-source community has released a lightweight reimplementation called dalle-mini<\/code> that is practical to fine-tune on smaller datasets. The trained models can then be used for image composition with improved control and customization.<\/p>\n
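    To make the fine-tuning step concrete, the sketch below shows the shape of a standard PyTorch training loop. The model, the stand-in latent batches, and the reconstruction loss are all placeholders for illustration; they are not part of any released DALL-E training code:<\/p>\n

```python
import torch
from torch import nn, optim

# Placeholder model standing in for a pre-trained generative network.
model = nn.Linear(16, 16)
optimizer = optim.Adam(model.parameters(), lr=1e-4)

# Placeholder dataset: a few batches of 16-dimensional latent vectors.
dataset = [torch.randn(4, 16) for _ in range(3)]

model.train()
steps = 0
for epoch in range(2):
    for batch in dataset:
        optimizer.zero_grad()
        output = model(batch)
        loss = nn.functional.mse_loss(output, batch)  # reconstruction objective
        loss.backward()
        optimizer.step()
        steps += 1

print("completed", steps, "fine-tuning steps")  # completed 6 fine-tuning steps
```

    A real fine-tune follows this same pattern, with the placeholder model and loss replaced by the generative model and its actual training objective.<\/p>\n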

    Conclusion<\/h2>\n

    In this tutorial, we learned how to use OpenAI DALL-E for image composition. We covered the steps involved in setting up DALL-E, composing images with textual prompts, modifying generation parameters, and controlling image content. We also explored advanced techniques like interpolation and fine-tuning for enhancing image composition.<\/p>\n

    DALL-E opens up exciting possibilities for generating unique and creative images based on natural language instructions. It can be used for various applications such as art and design, content generation, and visual storytelling. With further improvements and advancements, DALL-E has the potential to revolutionize the way we create and interact with visual media.<\/p>\n","protected":false},"excerpt":{"rendered":"

    Image Source: OpenAI Introduction OpenAI DALL-E is an advanced AI model that can generate high-quality images from textual descriptions. It has the capability to understand and generate images based on natural language instructions. DALL-E is powered by a combination of transformer-based models and advanced generative techniques. In this tutorial, we Continue Reading<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_import_markdown_pro_load_document_selector":0,"_import_markdown_pro_submit_text_textarea":"","footnotes":""},"categories":[1],"tags":[1049,834,1048,1051,1050,224],"yoast_head":"\nHow to Use OpenAI DALL-E for Image Composition - Pantherax Blogs<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"http:\/\/localhost:10003\/how-to-use-openai-dall-e-for-image-composition\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"How to Use OpenAI DALL-E for Image Composition\" \/>\n<meta property=\"og:description\" content=\"Image Source: OpenAI Introduction OpenAI DALL-E is an advanced AI model that can generate high-quality images from textual descriptions. It has the capability to understand and generate images based on natural language instructions. DALL-E is powered by a combination of transformer-based models and advanced generative techniques. 
In this tutorial, we Continue Reading\" \/>\n<meta property=\"og:url\" content=\"http:\/\/localhost:10003\/how-to-use-openai-dall-e-for-image-composition\/\" \/>\n<meta property=\"og:site_name\" content=\"Pantherax Blogs\" \/>\n<meta property=\"article:published_time\" content=\"2023-11-04T23:14:01+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-11-05T05:48:23+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/cdn.openai.com\/dall-e\/what-is-dalle\/dalle_vase_images.png\" \/>\n<meta name=\"author\" content=\"Panther\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Panther\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\n\t \"@context\": \"https:\/\/schema.org\",\n\t \"@graph\": [\n\t {\n\t \"@type\": \"Article\",\n\t \"@id\": \"http:\/\/localhost:10003\/how-to-use-openai-dall-e-for-image-composition\/#article\",\n\t \"isPartOf\": {\n\t \"@id\": \"http:\/\/localhost:10003\/how-to-use-openai-dall-e-for-image-composition\/\"\n\t },\n\t \"author\": {\n\t \"name\": \"Panther\",\n\t \"@id\": \"http:\/\/localhost:10003\/#\/schema\/person\/b63d816f4964b163e53cbbcffaa0f3d7\"\n\t },\n\t \"headline\": \"How to Use OpenAI DALL-E for Image Composition\",\n\t \"datePublished\": \"2023-11-04T23:14:01+00:00\",\n\t \"dateModified\": \"2023-11-05T05:48:23+00:00\",\n\t \"mainEntityOfPage\": {\n\t \"@id\": \"http:\/\/localhost:10003\/how-to-use-openai-dall-e-for-image-composition\/\"\n\t },\n\t \"wordCount\": 971,\n\t \"publisher\": {\n\t \"@id\": \"http:\/\/localhost:10003\/#organization\"\n\t },\n\t \"keywords\": [\n\t \"\\\"AI image generation\\\"\",\n\t \"\\\"creative AI\\\"\",\n\t \"\\\"image composition\\\"\",\n\t \"\\\"image manipulation techniques\\\"]\",\n\t \"\\\"neural 
network art\\\"\",\n\t \"[\\\"OpenAI DALL-E\\\"\"\n\t ],\n\t \"inLanguage\": \"en-US\"\n\t },\n\t {\n\t \"@type\": \"WebPage\",\n\t \"@id\": \"http:\/\/localhost:10003\/how-to-use-openai-dall-e-for-image-composition\/\",\n\t \"url\": \"http:\/\/localhost:10003\/how-to-use-openai-dall-e-for-image-composition\/\",\n\t \"name\": \"How to Use OpenAI DALL-E for Image Composition - Pantherax Blogs\",\n\t \"isPartOf\": {\n\t \"@id\": \"http:\/\/localhost:10003\/#website\"\n\t },\n\t \"datePublished\": \"2023-11-04T23:14:01+00:00\",\n\t \"dateModified\": \"2023-11-05T05:48:23+00:00\",\n\t \"breadcrumb\": {\n\t \"@id\": \"http:\/\/localhost:10003\/how-to-use-openai-dall-e-for-image-composition\/#breadcrumb\"\n\t },\n\t \"inLanguage\": \"en-US\",\n\t \"potentialAction\": [\n\t {\n\t \"@type\": \"ReadAction\",\n\t \"target\": [\n\t \"http:\/\/localhost:10003\/how-to-use-openai-dall-e-for-image-composition\/\"\n\t ]\n\t }\n\t ]\n\t },\n\t {\n\t \"@type\": \"BreadcrumbList\",\n\t \"@id\": \"http:\/\/localhost:10003\/how-to-use-openai-dall-e-for-image-composition\/#breadcrumb\",\n\t \"itemListElement\": [\n\t {\n\t \"@type\": \"ListItem\",\n\t \"position\": 1,\n\t \"name\": \"Home\",\n\t \"item\": \"http:\/\/localhost:10003\/\"\n\t },\n\t {\n\t \"@type\": \"ListItem\",\n\t \"position\": 2,\n\t \"name\": \"How to Use OpenAI DALL-E for Image Composition\"\n\t }\n\t ]\n\t },\n\t {\n\t \"@type\": \"WebSite\",\n\t \"@id\": \"http:\/\/localhost:10003\/#website\",\n\t \"url\": \"http:\/\/localhost:10003\/\",\n\t \"name\": \"Pantherax Blogs\",\n\t \"description\": \"\",\n\t \"publisher\": {\n\t \"@id\": \"http:\/\/localhost:10003\/#organization\"\n\t },\n\t \"potentialAction\": [\n\t {\n\t \"@type\": \"SearchAction\",\n\t \"target\": {\n\t \"@type\": \"EntryPoint\",\n\t \"urlTemplate\": \"http:\/\/localhost:10003\/?s={search_term_string}\"\n\t },\n\t \"query-input\": \"required name=search_term_string\"\n\t }\n\t ],\n\t \"inLanguage\": \"en-US\"\n\t },\n\t {\n\t \"@type\": 
\"Organization\",\n\t \"@id\": \"http:\/\/localhost:10003\/#organization\",\n\t \"name\": \"Pantherax Blogs\",\n\t \"url\": \"http:\/\/localhost:10003\/\",\n\t \"logo\": {\n\t \"@type\": \"ImageObject\",\n\t \"inLanguage\": \"en-US\",\n\t \"@id\": \"http:\/\/localhost:10003\/#\/schema\/logo\/image\/\",\n\t \"url\": \"http:\/\/localhost:10003\/wp-content\/uploads\/2023\/11\/cropped-9e7721cb-2d62-4f72-ab7f-7d1d8db89226.jpeg\",\n\t \"contentUrl\": \"http:\/\/localhost:10003\/wp-content\/uploads\/2023\/11\/cropped-9e7721cb-2d62-4f72-ab7f-7d1d8db89226.jpeg\",\n\t \"width\": 1024,\n\t \"height\": 1024,\n\t \"caption\": \"Pantherax Blogs\"\n\t },\n\t \"image\": {\n\t \"@id\": \"http:\/\/localhost:10003\/#\/schema\/logo\/image\/\"\n\t }\n\t },\n\t {\n\t \"@type\": \"Person\",\n\t \"@id\": \"http:\/\/localhost:10003\/#\/schema\/person\/b63d816f4964b163e53cbbcffaa0f3d7\",\n\t \"name\": \"Panther\",\n\t \"image\": {\n\t \"@type\": \"ImageObject\",\n\t \"inLanguage\": \"en-US\",\n\t \"@id\": \"http:\/\/localhost:10003\/#\/schema\/person\/image\/\",\n\t \"url\": \"http:\/\/2.gravatar.com\/avatar\/b8c0eda5a49f8f31ec32d0a0f9d6f838?s=96&d=mm&r=g\",\n\t \"contentUrl\": \"http:\/\/2.gravatar.com\/avatar\/b8c0eda5a49f8f31ec32d0a0f9d6f838?s=96&d=mm&r=g\",\n\t \"caption\": \"Panther\"\n\t },\n\t \"sameAs\": [\n\t \"http:\/\/localhost:10003\"\n\t ],\n\t \"url\": \"http:\/\/localhost:10003\/author\/pepethefrog\/\"\n\t }\n\t ]\n\t}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"How to Use OpenAI DALL-E for Image Composition - Pantherax Blogs","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"http:\/\/localhost:10003\/how-to-use-openai-dall-e-for-image-composition\/","og_locale":"en_US","og_type":"article","og_title":"How to Use OpenAI DALL-E for Image Composition","og_description":"Image Source: OpenAI Introduction OpenAI DALL-E is an advanced AI model that can generate high-quality images from textual descriptions. It has the capability to understand and generate images based on natural language instructions. DALL-E is powered by a combination of transformer-based models and advanced generative techniques. In this tutorial, we Continue Reading","og_url":"http:\/\/localhost:10003\/how-to-use-openai-dall-e-for-image-composition\/","og_site_name":"Pantherax Blogs","article_published_time":"2023-11-04T23:14:01+00:00","article_modified_time":"2023-11-05T05:48:23+00:00","og_image":[{"url":"https:\/\/cdn.openai.com\/dall-e\/what-is-dalle\/dalle_vase_images.png"}],"author":"Panther","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Panther","Est. 
reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"http:\/\/localhost:10003\/how-to-use-openai-dall-e-for-image-composition\/#article","isPartOf":{"@id":"http:\/\/localhost:10003\/how-to-use-openai-dall-e-for-image-composition\/"},"author":{"name":"Panther","@id":"http:\/\/localhost:10003\/#\/schema\/person\/b63d816f4964b163e53cbbcffaa0f3d7"},"headline":"How to Use OpenAI DALL-E for Image Composition","datePublished":"2023-11-04T23:14:01+00:00","dateModified":"2023-11-05T05:48:23+00:00","mainEntityOfPage":{"@id":"http:\/\/localhost:10003\/how-to-use-openai-dall-e-for-image-composition\/"},"wordCount":971,"publisher":{"@id":"http:\/\/localhost:10003\/#organization"},"keywords":["\"AI image generation\"","\"creative AI\"","\"image composition\"","\"image manipulation techniques\"]","\"neural network art\"","[\"OpenAI DALL-E\""],"inLanguage":"en-US"},{"@type":"WebPage","@id":"http:\/\/localhost:10003\/how-to-use-openai-dall-e-for-image-composition\/","url":"http:\/\/localhost:10003\/how-to-use-openai-dall-e-for-image-composition\/","name":"How to Use OpenAI DALL-E for Image Composition - Pantherax Blogs","isPartOf":{"@id":"http:\/\/localhost:10003\/#website"},"datePublished":"2023-11-04T23:14:01+00:00","dateModified":"2023-11-05T05:48:23+00:00","breadcrumb":{"@id":"http:\/\/localhost:10003\/how-to-use-openai-dall-e-for-image-composition\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["http:\/\/localhost:10003\/how-to-use-openai-dall-e-for-image-composition\/"]}]},{"@type":"BreadcrumbList","@id":"http:\/\/localhost:10003\/how-to-use-openai-dall-e-for-image-composition\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"http:\/\/localhost:10003\/"},{"@type":"ListItem","position":2,"name":"How to Use OpenAI DALL-E for Image 
Composition"}]},{"@type":"WebSite","@id":"http:\/\/localhost:10003\/#website","url":"http:\/\/localhost:10003\/","name":"Pantherax Blogs","description":"","publisher":{"@id":"http:\/\/localhost:10003\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"http:\/\/localhost:10003\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Organization","@id":"http:\/\/localhost:10003\/#organization","name":"Pantherax Blogs","url":"http:\/\/localhost:10003\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"http:\/\/localhost:10003\/#\/schema\/logo\/image\/","url":"http:\/\/localhost:10003\/wp-content\/uploads\/2023\/11\/cropped-9e7721cb-2d62-4f72-ab7f-7d1d8db89226.jpeg","contentUrl":"http:\/\/localhost:10003\/wp-content\/uploads\/2023\/11\/cropped-9e7721cb-2d62-4f72-ab7f-7d1d8db89226.jpeg","width":1024,"height":1024,"caption":"Pantherax Blogs"},"image":{"@id":"http:\/\/localhost:10003\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"http:\/\/localhost:10003\/#\/schema\/person\/b63d816f4964b163e53cbbcffaa0f3d7","name":"Panther","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"http:\/\/localhost:10003\/#\/schema\/person\/image\/","url":"http:\/\/2.gravatar.com\/avatar\/b8c0eda5a49f8f31ec32d0a0f9d6f838?s=96&d=mm&r=g","contentUrl":"http:\/\/2.gravatar.com\/avatar\/b8c0eda5a49f8f31ec32d0a0f9d6f838?s=96&d=mm&r=g","caption":"Panther"},"sameAs":["http:\/\/localhost:10003"],"url":"http:\/\/localhost:10003\/author\/pepethefrog\/"}]}},"jetpack_sharing_enabled":true,"jetpack_featured_media_url":"","_links":{"self":[{"href":"http:\/\/localhost:10003\/wp-json\/wp\/v2\/posts\/4047"}],"collection":[{"href":"http:\/\/localhost:10003\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/localhost:10003\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/localhost:10003\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":
"http:\/\/localhost:10003\/wp-json\/wp\/v2\/comments?post=4047"}],"version-history":[{"count":1,"href":"http:\/\/localhost:10003\/wp-json\/wp\/v2\/posts\/4047\/revisions"}],"predecessor-version":[{"id":4503,"href":"http:\/\/localhost:10003\/wp-json\/wp\/v2\/posts\/4047\/revisions\/4503"}],"wp:attachment":[{"href":"http:\/\/localhost:10003\/wp-json\/wp\/v2\/media?parent=4047"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/localhost:10003\/wp-json\/wp\/v2\/categories?post=4047"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/localhost:10003\/wp-json\/wp\/v2\/tags?post=4047"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}