{"id":4055,"date":"2023-11-04T23:14:02","date_gmt":"2023-11-04T23:14:02","guid":{"rendered":"http:\/\/localhost:10003\/how-to-use-llms-for-text-generation-and-completion\/"},"modified":"2023-11-05T05:48:02","modified_gmt":"2023-11-05T05:48:02","slug":"how-to-use-llms-for-text-generation-and-completion","status":"publish","type":"post","link":"http:\/\/localhost:10003\/how-to-use-llms-for-text-generation-and-completion\/","title":{"rendered":"How to use LLMs for text generation and completion"},"content":{"rendered":"

Large language models (LLMs) have revolutionized natural language processing tasks such as text generation and completion. Models like GPT-3 (Generative Pre-trained Transformer 3) produce remarkably coherent and contextually relevant text.<\/p>\n

One classic approach to building language models is the Long Short-Term Memory (LSTM) network. LSTMs are a type of recurrent neural network (RNN) that can model long-term dependencies in sequential data; although transformer architectures have largely superseded them, they remain an accessible and instructive starting point. In this tutorial, we will explore how to use LSTMs for text generation and completion. Specifically, we will cover the following topics:<\/p>\n

    \n
  1. Understanding LSTMs<\/li>\n
  2. Preparing the Data<\/li>\n
  3. Building the LSTM Model<\/li>\n
  4. Training the Model<\/li>\n
  5. Generating Text<\/li>\n
  6. Completing Text<\/li>\n<\/ol>\n

    Understanding LSTMs<\/h2>\n

    Before diving into the implementation details, let’s review how LSTMs work. LSTMs are designed to overcome the limitations of standard RNNs in capturing long-term dependencies. They achieve this by introducing a memory cell and three gating mechanisms: the input gate, the forget gate, and the output gate.<\/p>\n

    The memory cell stores information over long sequences, while the gating mechanisms regulate the flow of information into and out of the cell. The input gate determines how much new information should be stored in the memory cell, the forget gate decides how much old information should be discarded, and the output gate controls how much information should be output to the next layers.<\/p>\n

    This architecture allows LSTMs to selectively remember or forget information from previous time steps, making them well-suited for tasks like text generation and completion.<\/p>\n
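
    To make the gating concrete, here is a minimal NumPy sketch of a single LSTM time step. The weight matrices and biases (W_i<\/code>, b_i<\/code>, and so on) are hypothetical placeholders rather than values from a trained model:<\/p>\n

    import numpy as np\n\ndef sigmoid(x):\n    return 1.0 / (1.0 + np.exp(-x))\n\ndef lstm_step(x_t, h_prev, c_prev, W_i, W_f, W_o, W_c, b_i, b_f, b_o, b_c):\n    z = np.concatenate([h_prev, x_t])  # previous hidden state + current input\n    i = sigmoid(W_i @ z + b_i)         # input gate: how much new information to store\n    f = sigmoid(W_f @ z + b_f)         # forget gate: how much old information to keep\n    o = sigmoid(W_o @ z + b_o)         # output gate: how much of the cell to expose\n    c_tilde = np.tanh(W_c @ z + b_c)   # candidate cell contents\n    c_t = f * c_prev + i * c_tilde     # updated memory cell\n    h_t = o * np.tanh(c_t)             # new hidden state\n    return h_t, c_t\n<\/code><\/pre>\n

    In practice, libraries such as Keras implement this cell for you; the sketch only shows how the three gates interact with the memory cell.<\/p>\n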

    Preparing the Data<\/h2>\n

    To train an LSTM model for text generation and completion, we first need to prepare the data. The data should be in a suitable format and organized such that it captures the sequential nature of the text.<\/p>\n

    Here is an example of how the data can be organized:<\/p>\n

    input_sequence -> target_sequence\n<\/code><\/pre>\n

    For instance, suppose we want to generate text based on the prompt “Once upon a time, there was a” and complete it with an appropriate ending. The corresponding input and target sequences can be as follows:<\/p>\n

    \"Once upon a time, there was a\" -> \" little girl who lived in a magical forest.\"\n<\/code><\/pre>\n

    It is important to prepare a dataset comprising numerous input-target sequence pairs for effective training.<\/p>\n
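
    One possible way to build such pairs with Keras, assuming a toy list of raw sentences called corpus<\/code> (a hypothetical placeholder for your own data), is to tokenize each sentence and emit every prefix together with the word that follows it:<\/p>\n

    from keras.preprocessing.text import Tokenizer\nfrom keras.preprocessing.sequence import pad_sequences\nimport numpy as np\n\n# Hypothetical toy corpus; in practice this would be many sentences\ncorpus = ['once upon a time there was a little girl who lived in a magical forest']\n\ntokenizer = Tokenizer()\ntokenizer.fit_on_texts(corpus)\nvocab_size = len(tokenizer.word_index) + 1  # index 0 is reserved for padding\n\n# Emit every prefix of each sentence together with the word that follows it\ninput_sequences, target_sequences = [], []\nfor line in corpus:\n    tokens = tokenizer.texts_to_sequences([line])[0]\n    for i in range(1, len(tokens)):\n        input_sequences.append(tokens[:i])\n        target_sequences.append(tokens[i])\n\nmax_length = max(len(seq) for seq in input_sequences)\ninput_sequences = pad_sequences(input_sequences, maxlen=max_length)\ntarget_sequences = np.array(target_sequences)\n<\/code><\/pre>\n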

    Building the LSTM Model<\/h2>\n

    Once the data is prepared, we can proceed to build the LSTM model. We will use Keras, a high-level deep learning library, for this purpose. Keras provides easy-to-use APIs for building and training neural networks.<\/p>\n

    To build the LSTM model, we need to import the required libraries and create the model using the Sequential API provided by Keras. Here is a sample code snippet to build the LSTM model:<\/p>\n

    from keras.models import Sequential\nfrom keras.layers import LSTM, Dense\n\nmodel = Sequential()\n# num_timesteps: length of each input sequence; num_features: vocabulary size (width of the one-hot vectors)\nmodel.add(LSTM(units=256, input_shape=(num_timesteps, num_features)))\n# Softmax head: one probability per word in the vocabulary\nmodel.add(Dense(units=num_features, activation='softmax'))\n<\/code><\/pre>\n

    In this example, we create a sequential model and add an LSTM layer with 256 units; the input_shape<\/code> parameter specifies the shape of each input sequence, where each timestep is assumed to be a one-hot vector over the vocabulary. Increasing or decreasing the number of LSTM units controls the capacity of the model. Finally, we add a Dense layer with a softmax activation, so the model outputs a probability distribution over the vocabulary.<\/p>\n
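
    Note that the generation code later in this tutorial feeds integer word indices into the model, which requires an Embedding<\/code> layer in front of the LSTM. A variant consistent with that setup, reusing vocab_size<\/code> and max_length<\/code> from the data-preparation sketch above (the embedding width of 100 is an arbitrary choice):<\/p>\n

    from keras.models import Sequential\nfrom keras.layers import Embedding, LSTM, Dense\n\nmodel = Sequential()\nmodel.add(Embedding(input_dim=vocab_size, output_dim=100, input_length=max_length))  # word indices -> dense vectors\nmodel.add(LSTM(units=256))\nmodel.add(Dense(units=vocab_size, activation='softmax'))\n<\/code><\/pre>\n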

    Training the Model<\/h2>\n

    Next, we need to train the LSTM model using the prepared dataset. Training the model involves feeding the input sequences to the model and updating its parameters using an optimization algorithm such as stochastic gradient descent (SGD).<\/p>\n

    To train the model, we need to compile it with an appropriate loss function and optimizer. Here is a code snippet to compile the model:<\/p>\n

    model.compile(loss='categorical_crossentropy', optimizer='adam')\n<\/code><\/pre>\n

    In this example, we use the categorical cross-entropy loss, which is appropriate when each target word is encoded as a one-hot vector over the vocabulary. The Adam optimizer adapts the learning rate during training, which typically converges faster than plain SGD.<\/p>\n
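
    If the targets are integer word indices (as in the data-preparation sketch above) rather than one-hot vectors, the sparse variant of the loss avoids materializing one-hot targets:<\/p>\n

    model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')\n<\/code><\/pre>\n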

    Once the model is compiled, we can train it by calling the fit()<\/code> function and passing in the input and target sequences:<\/p>\n

    model.fit(input_sequences, target_sequences, epochs=num_epochs, batch_size=batch_size)\n<\/code><\/pre>\n

    Make sure to adjust the num_epochs<\/code> and batch_size<\/code> parameters based on the size of your dataset and available computing resources.<\/p>\n
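
    Optionally, Keras callbacks can save the best weights seen so far and stop training once the loss stops improving; the checkpoint file name below is a hypothetical placeholder:<\/p>\n

    from keras.callbacks import ModelCheckpoint, EarlyStopping\n\ncallbacks = [\n    ModelCheckpoint('lstm_textgen.h5', monitor='loss', save_best_only=True),  # hypothetical path\n    EarlyStopping(monitor='loss', patience=3),  # stop after 3 epochs without improvement\n]\nmodel.fit(input_sequences, target_sequences, epochs=num_epochs, batch_size=batch_size, callbacks=callbacks)\n<\/code><\/pre>\n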

    Generating Text<\/h2>\n

    After the LSTM model is trained, we can use it to generate new text based on a given prompt. Text generation involves providing an initial input sequence to the model and sampling predicted words to form the output sequence.<\/p>\n

    To generate text, we can utilize the trained LSTM model by calling the predict()<\/code> function. Here is a sample code snippet to generate text:<\/p>\n

    initial_sequence = \"Once upon a time, there was a\"\ngenerated_text = initial_sequence\n\nfor _ in range(max_length):\n    input_sequence = tokenizer.texts_to_sequences([generated_text])[0]\n    input_sequence = pad_sequences([input_sequence], maxlen=max_length)\n    predicted_word_index = np.argmax(model.predict(input_sequence))\n    predicted_word = tokenizer.index_word[predicted_word_index]\n    generated_text += \" \" + predicted_word\n\nprint(generated_text)\n<\/code><\/pre>\n

    In this example, num_words_to_generate<\/code> caps how many words we append, while max_length<\/code> is the input length the model was trained with; pad_sequences<\/code> keeps only the most recent max_length<\/code> word indices. The tokenizer<\/code> converts words to integers and back. By picking words with np.argmax()<\/code>, we always choose the single most probable next word (greedy decoding).<\/p>\n
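
    One caveat: greedy np.argmax()<\/code> decoding tends to loop on the same high-probability words. A common refinement is to sample from the predicted distribution with a temperature. Here is a minimal sketch; the sample_word_index<\/code> helper is our own illustration, not part of Keras:<\/p>\n

    import numpy as np\n\ndef sample_word_index(probs, temperature=1.0):\n    # Temperatures below 1.0 sharpen the distribution; above 1.0 flatten it\n    logits = np.log(probs + 1e-9) / temperature\n    scaled = np.exp(logits) / np.sum(np.exp(logits))\n    return np.random.choice(len(scaled), p=scaled)\n\n# Inside the generation loop, replace the np.argmax line with:\nprobs = model.predict(input_sequence)[0]\npredicted_word_index = sample_word_index(probs, temperature=0.8)\n<\/code><\/pre>\n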

    Completing Text<\/h2>\n

    In addition to generating text from a prompt, the same model can complete partial text: we feed the incomplete sequence to the model and let it predict the words that follow until a stop condition is met.<\/p>\n

    To complete text, we can utilize the trained LSTM model in a similar manner as text generation. Here is a sample code snippet to complete text:<\/p>\n

    partial_sequence = \"Once upon a time, there was a\"\ncompleted_text = partial_sequence\n\nfor _ in range(max_length):\n    input_sequence = tokenizer.texts_to_sequences([completed_text])[0]\n    input_sequence = pad_sequences([input_sequence], maxlen=max_length)\n    predicted_word_index = np.argmax(model.predict(input_sequence))\n    predicted_word = tokenizer.index_word[predicted_word_index]\n    if predicted_word == \"<end>\":\n        break\n    completed_text += \" \" + predicted_word\n\nprint(completed_text)\n<\/code><\/pre>\n

    In this example, we treat a special <end><\/code> token as the stop condition; this assumes the training sequences were terminated with such a token. If the model predicts it, we stop and consider the text complete.<\/p>\n

    Conclusion<\/h2>\n

    In this tutorial, we have explored how to use LSTMs for text generation and completion. We started by understanding the basics of LSTMs and their architecture. Then, we discussed the process of preparing the data, building the LSTM model, and training it using the prepared dataset. Finally, we learned how to generate and complete text using the trained LSTM model.<\/p>\n

    LSTMs can generate surprisingly coherent text, and with the advent of modern large language models the quality of generated and completed text has improved dramatically.<\/p>\n
