{"id":4102,"date":"2023-11-04T23:14:03","date_gmt":"2023-11-04T23:14:03","guid":{"rendered":"http:\/\/localhost:10003\/how-to-use-llms-for-music-analysis-and-generation\/"},"modified":"2023-11-05T05:48:01","modified_gmt":"2023-11-05T05:48:01","slug":"how-to-use-llms-for-music-analysis-and-generation","status":"publish","type":"post","link":"http:\/\/localhost:10003\/how-to-use-llms-for-music-analysis-and-generation\/","title":{"rendered":"How to use LLMs for music analysis and generation"},"content":{"rendered":"

How to Use LSTMs for Music Analysis and Generation<\/h1>\n

Introduction<\/h2>\n

Long Short-Term Memory (LSTM) is a type of Recurrent Neural Network (RNN) architecture that is particularly effective at modeling sequential data. It has been used successfully in many applications, including natural language processing and music generation. In this tutorial, we will explore how to use LSTMs for music analysis and generation.<\/p>\n

Prerequisites<\/h2>\n

Before we dive into implementing LSTMs for music analysis and generation, there are a few prerequisites that you should have:<\/p>\n

    \n
  1. Basic understanding of Python programming.<\/li>\n
  2. Familiarity with the concepts of machine learning and deep learning.<\/li>\n
  3. Knowledge of the Keras library.<\/li>\n<\/ol>\n

    If you are new to any of these topics, I recommend taking some time to learn them before continuing with this tutorial.<\/p>\n

    Setting Up the Environment<\/h2>\n

    To begin, let’s set up our environment by installing the necessary libraries. We will be using Python, the Keras deep learning library, and the music21 library for this tutorial.<\/p>\n

      \n
    1. Start by installing Python, if you don’t have it already. You can download and install the latest version from the official Python website<\/a>.<\/p>\n<\/li>\n
    2. \n

      Next, open a command prompt or terminal and install the Keras and music21 libraries (Keras also requires a deep learning backend such as TensorFlow) by running the following command:<\/p>\n

      pip install tensorflow keras music21\n<\/code><\/pre>\n<\/li>\n
    3. Additionally, we will need a dataset of MIDI files to train our LSTM model. You can find MIDI files for music in various genres from websites like MIDIworld<\/a> or FreeMidi<\/a>.<\/p>\n<\/li>\n<\/ol>\n

      Once you have set up your environment, we can move on to the next step.<\/p>\n
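      Before moving on, it is worth confirming that music21 can read one of the MIDI files you downloaded. The snippet below is a minimal sanity check; the file name example.mid<\/code> is a placeholder for any MIDI file in your dataset.<\/p>\n

      from music21 import converter\n\n# Parse a single MIDI file to confirm that music21 is installed correctly.\n# 'example.mid' is a placeholder for any MIDI file you downloaded.\nmidi = converter.parse('example.mid')\nprint('Parsed', len(midi.flat.notes), 'notes and chords')\n<\/code><\/pre>\n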

      Preprocessing the Data<\/h2>\n

      Before we can train our LSTM model, we need to preprocess our MIDI dataset. MIDI files contain musical note information, including the pitch, duration, and velocity of each note. We will convert this information into a numerical representation that can be understood by our LSTM model.<\/p>\n

        \n
      1. First, import the necessary libraries:\n
        import glob\nimport numpy as np\nfrom music21 import converter, instrument, note, chord, stream\n<\/code><\/pre>\n

        The glob<\/code> library helps us find all the MIDI files in a directory, while the numpy<\/code> library helps us manipulate arrays of data. The music21<\/code> library provides tools for working with music data in Python.<\/p>\n<\/li>\n

      2. \n

        Next, define a function to process the MIDI files and extract the musical note information:<\/p>\n

        def process_midi_files(directory):\n   notes = []\n\n   for file in glob.glob(directory + \"\/*.mid\"):\n       midi = converter.parse(file)\n       notes_to_parse = None\n\n       try:\n           # If the file has instrument parts, use the notes of the first part.\n           s2 = instrument.partitionByInstrument(midi)\n           notes_to_parse = s2.parts[0].recurse()\n       except Exception:\n           # Otherwise fall back to the flat list of notes.\n           notes_to_parse = midi.flat.notes\n\n       for element in notes_to_parse:\n           if isinstance(element, note.Note):\n               # Single notes are stored by pitch name, e.g. 'E4'.\n               notes.append(str(element.pitch))\n           elif isinstance(element, chord.Chord):\n               # Chords are stored as dot-separated pitch classes, e.g. '4.7.11'.\n               notes.append('.'.join(str(n) for n in element.normalOrder))\n\n   return notes\n<\/code><\/pre>\n

        This function takes a directory path as input and returns a list of notes or chords extracted from the MIDI files.<\/p>\n<\/li>\n

      3. \n

        Now, we can use the process_midi_files()<\/code> function to preprocess our MIDI dataset:<\/p>\n

        dataset_path = \"path\/to\/dataset\"\nnotes = process_midi_files(dataset_path)\n<\/code><\/pre>\n

        Replace \"path\/to\/dataset\"<\/code> with the path to your MIDI dataset.<\/p>\n<\/li>\n

      4. \n

        It is essential to get an overview of the dataset before proceeding. Let’s print some statistics about the dataset:<\/p>\n

        print(\"Total Notes:\", len(notes))\nprint(\"Unique Notes:\", len(set(notes)))\n<\/code><\/pre>\n

        This will give you the total number of notes and the number of unique notes (the vocabulary size) in your dataset. For a closer look at which notes and chords dominate the dataset, see the frequency-count sketch after this list.<\/p>\n<\/li>\n

      5. \n

        Next, we can prepare our input sequences and labels for training. We will use a sliding window technique to create sequences of fixed length from the notes. Additionally, we will map each unique note to a numerical value to facilitate training.<\/p>\n

        from keras.utils import to_categorical\n\nsequence_length = 100\n\n# Map every unique note\/chord token to an integer.\npitch_names = sorted(set(notes))\nnote_to_int = dict((note, number) for number, note in enumerate(pitch_names))\n\nnetwork_input = []\nnetwork_output = []\n\n# Slide a window of length sequence_length over the notes:\n# the window is the input, the note that follows it is the label.\nfor i in range(0, len(notes) - sequence_length, 1):\n   sequence_in = notes[i:i + sequence_length]\n   sequence_out = notes[i + sequence_length]\n   network_input.append([note_to_int[char] for char in sequence_in])\n   network_output.append(note_to_int[sequence_out])\n\nn_patterns = len(network_input)\n\n# Reshape to (samples, time steps, features) and normalize to [0, 1].\nnetwork_input = np.reshape(network_input, (n_patterns, sequence_length, 1))\nnetwork_input = network_input \/ float(len(pitch_names))\n\n# One-hot encode the output labels.\nnetwork_output = to_categorical(network_output)\n<\/code><\/pre>\n

        This code snippet creates input sequences of length sequence_length<\/code> and maps each note to a numerical value using the note_to_int<\/code> dictionary. The input sequences are normalized, and the output labels are one-hot encoded for training.<\/p>\n<\/li>\n<\/ol>\n
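        As an optional analysis step (referenced in step 4 above), the following sketch counts how often each note or chord appears and double-checks the shapes of the prepared arrays. It uses only the standard library’s collections.Counter<\/code> and is not required for training.<\/p>\n

        from collections import Counter\n\n# Count how often each note\/chord token appears in the dataset.\nnote_counts = Counter(notes)\nprint(\"10 most common tokens:\")\nfor token, count in note_counts.most_common(10):\n   print(token, count)\n\n# Sanity-check the shapes of the prepared training arrays.\nprint(\"network_input shape:\", network_input.shape)\nprint(\"network_output shape:\", network_output.shape)\n<\/code><\/pre>\n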

        At this point, we have preprocessed our MIDI dataset and prepared our input sequences and labels. We can now proceed to the next step of building and training our LSTM model.<\/p>\n

        Building and Training the LSTM Model<\/h2>\n
          \n
        1. Import the necessary libraries:\n
          from keras.models import Sequential\nfrom keras.layers import LSTM, Dropout, Dense, Activation\n<\/code><\/pre>\n

          These imports provide the Sequential model class and the layers we need for building and training our LSTM model.<\/p>\n<\/li>\n

        2. \n

          Next, define the structure of the LSTM model:<\/p>\n

          model = Sequential()\nmodel.add(LSTM(\n   512,\n   input_shape=(network_input.shape[1], network_input.shape[2]),\n   return_sequences=True\n))\nmodel.add(Dropout(0.3))\nmodel.add(LSTM(512, return_sequences=True))\nmodel.add(Dropout(0.3))\nmodel.add(LSTM(512))\nmodel.add(Dense(256))\nmodel.add(Dropout(0.3))\nmodel.add(Dense(len(set(notes))))\nmodel.add(Activation('softmax'))\n\nmodel.compile(loss='categorical_crossentropy', optimizer='adam')\n<\/code><\/pre>\n

          This code snippet defines a three-layer LSTM model with dropout regularization. The model is compiled using the categorical cross-entropy loss function and the Adam optimizer.<\/p>\n<\/li>\n

        3. \n

          Once the model structure is defined, we can train the model using our preprocessed data:<\/p>\n

          history = model.fit(network_input, network_output, epochs=200, batch_size=64)\n<\/code><\/pre>\n

          This code snippet trains the model for 200 epochs with a batch size of 64. Training on a full MIDI dataset can take a long time, especially without a GPU; an optional checkpointing sketch is shown after this list.<\/p>\n<\/li>\n

        4. \n

          After training the model, we can save it to disk for future use:<\/p>\n

          model.save('music_lstm_model.h5')\n<\/code><\/pre>\n

          This will save the trained model to a file named music_lstm_model.h5<\/code>.<\/p>\n<\/li>\n<\/ol>\n
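          Training for 200 epochs can take many hours, and an interrupted run loses all progress. The sketch below (referenced in step 3) is one way to save intermediate weights with Keras’ ModelCheckpoint<\/code> callback and to inspect the recorded loss; the checkpoint file name pattern is only an example.<\/p>\n

          from keras.callbacks import ModelCheckpoint\n\n# Save the model whenever the training loss improves.\n# The file name pattern below is only an example.\ncheckpoint = ModelCheckpoint(\n   'weights-epoch-{epoch:02d}-loss-{loss:.4f}.h5',\n   monitor='loss',\n   save_best_only=True\n)\n\nhistory = model.fit(\n   network_input,\n   network_output,\n   epochs=200,\n   batch_size=64,\n   callbacks=[checkpoint]\n)\n\n# The history object records the loss for every epoch.\nprint(\"Final training loss:\", history.history['loss'][-1])\n<\/code><\/pre>\n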

          Congratulations! You have successfully built and trained an LSTM model for music analysis. Now, let’s move on to the final step of generating music using the trained model.<\/p>\n
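          One practical note before generating: the saved model file does not store the note vocabulary or the prepared input sequences, both of which are needed for generation. If you plan to generate music in a separate script or session, a simple approach is to save them with the standard library’s pickle<\/code> module; the file name notes_vocab.pkl<\/code> below is just an example.<\/p>\n

          import pickle\n\n# Store the vocabulary and prepared input sequences alongside the model\n# so a separate generation script can reuse them.\nwith open('notes_vocab.pkl', 'wb') as f:\n   pickle.dump({'pitch_names': pitch_names, 'network_input': network_input}, f)\n\n# Later, in the generation script:\n# with open('notes_vocab.pkl', 'rb') as f:\n#     data = pickle.load(f)\n# pitch_names = data['pitch_names']\n# network_input = data['network_input']\n<\/code><\/pre>\n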

          Generating Music<\/h2>\n

          To generate music using the trained LSTM model, we need to define a prediction function that predicts the next note given a sequence of notes. We can then use this function to generate a sequence of notes and convert it back to a MIDI file.<\/p>\n

            \n
          1. Import the necessary libraries:\n
            from keras.models import load_model\n\nmodel = load_model('music_lstm_model.h5')\n<\/code><\/pre>\n

            We need to import the load_model()<\/code> function from Keras to load the trained model from the saved file.<\/p>\n<\/li>\n

          2. \n

            Next, define a function to generate new music:<\/p>\n

            def generate_music(model, network_input, pitch_names, sequence_length):\n   # Pick a random seed sequence from the training input.\n   start = np.random.randint(0, len(network_input)-1)\n   int_to_note = dict((number, note) for number, note in enumerate(pitch_names))\n   n_vocab = len(pitch_names)\n\n   # network_input is already normalized, so recover the integer indices of the seed.\n   pattern = [int(round(value * n_vocab)) for value in network_input[start].flatten()]\n   prediction_output = []\n\n   # Generate 500 notes, one step at a time.\n   for note_index in range(500):\n       prediction_input = np.reshape(pattern, (1, len(pattern), 1))\n       prediction_input = prediction_input \/ float(n_vocab)\n\n       prediction = model.predict(prediction_input, verbose=0)\n\n       # Take the most probable note and map it back to its name.\n       index = int(np.argmax(prediction))\n       result = int_to_note[index]\n       prediction_output.append(result)\n\n       # Slide the window forward by one step.\n       pattern.append(index)\n       pattern = pattern[1:len(pattern)]\n\n   return prediction_output\n<\/code><\/pre>\n

            This function takes the trained model, input sequences, pitch names, and sequence length as input and returns a sequence of predicted notes.<\/p>\n<\/li>\n

          3. \n

            Finally, we can generate music using the trained model:<\/p>\n

            generated_notes = generate_music(model, network_input, pitch_names, sequence_length)\n<\/code><\/pre>\n

            This code snippet generates a sequence of 500 notes using the trained model.<\/p>\n<\/li>\n

          4. \n

            To convert the generated notes back to a MIDI file, we can use the following code:<\/p>\n

            def create_midi_file(notes):\n   offset = 0\n   output_notes = []\n\n   for pattern in notes:\n       if ('.' in pattern) or pattern.isdigit():\n           # The token is a chord, stored as dot-separated pitch classes.\n           notes_in_chord = pattern.split('.')\n           chord_notes = []\n\n           for current_note in notes_in_chord:\n               new_note = note.Note(int(current_note))\n               new_note.storedInstrument = instrument.Piano()\n               chord_notes.append(new_note)\n\n           new_chord = chord.Chord(chord_notes)\n           new_chord.offset = offset\n           output_notes.append(new_chord)\n       else:\n           # The token is a single note, stored by pitch name (e.g. 'E4').\n           new_note = note.Note(pattern)\n           new_note.offset = offset\n           new_note.storedInstrument = instrument.Piano()\n           output_notes.append(new_note)\n\n       # Advance the offset so the notes do not all start at the same time.\n       offset += 0.5\n\n   midi_stream = stream.Stream(output_notes)\n   midi_stream.write('midi', fp='generated_music.mid')\n<\/code><\/pre>\n

            This code snippet converts a sequence of notes to a music21<\/code> Stream object, which can then be written to a MIDI file.<\/p>\n<\/li>\n

          5. \n

            Finally, let’s generate the MIDI file:<\/p>\n

            create_midi_file(generated_notes)\n<\/code><\/pre>\n

            This will create a file named generated_music.mid<\/code> containing the generated music. A quick way to verify the output file is shown after this list.<\/p>\n<\/li>\n<\/ol>\n
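            As a quick check of the result (mentioned in the previous step), you can print the first few generated tokens and re-parse the written file with music21 to confirm it can be read back. This is only a sanity check of the output.<\/p>\n

            from music21 import converter\n\n# Look at the first few generated tokens.\nprint(generated_notes[:20])\n\n# Re-parse the written MIDI file to confirm it can be read back.\ngenerated_stream = converter.parse('generated_music.mid')\nprint('Notes and chords in the generated file:', len(generated_stream.flat.notes))\n<\/code><\/pre>\n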

            That’s it! You have now successfully trained an LSTM model for music analysis and generated new music with it. You can experiment with different datasets, model structures, and hyperparameters to generate music that suits your preferences.<\/p>\n

            Conclusion<\/h2>\n

            In this tutorial, you learned how to use LSTMs for music analysis and generation. We covered the steps involved in preprocessing MIDI data, building and training an LSTM model, and generating new music using the trained model. By applying the techniques covered in this tutorial, you should be able to explore further and generate unique music compositions using deep learning.<\/p>\n","protected":false},"excerpt":{"rendered":"

            How to Use LSTMs for Music Analysis and Generation Introduction Long Short-Term Memory (LSTM) is a type of Recurrent Neural Network (RNN) architecture that is particularly effective at modeling sequences in data. It has been successfully used in many applications, including natural language processing and music generation. In this tutorial, Continue Reading<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_import_markdown_pro_load_document_selector":0,"_import_markdown_pro_submit_text_textarea":"","footnotes":""},"categories":[1],"tags":[1297,451,1294,1296,1299,1295,1300,1298,333,1293],"yoast_head":"\nHow to use LLMs for music analysis and generation - Pantherax Blogs<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"http:\/\/localhost:10003\/how-to-use-llms-for-music-analysis-and-generation\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"How to use LLMs for music analysis and generation\" \/>\n<meta property=\"og:description\" content=\"How to Use LSTMs for Music Analysis and Generation Introduction Long Short-Term Memory (LSTM) is a type of Recurrent Neural Network (RNN) architecture that is particularly effective at modeling sequences in data. It has been successfully used in many applications, including natural language processing and music generation. In this tutorial, Continue Reading\" \/>\n<meta property=\"og:url\" content=\"http:\/\/localhost:10003\/how-to-use-llms-for-music-analysis-and-generation\/\" \/>\n<meta property=\"og:site_name\" content=\"Pantherax Blogs\" \/>\n<meta property=\"article:published_time\" content=\"2023-11-04T23:14:03+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-11-05T05:48:01+00:00\" \/>\n<meta name=\"author\" content=\"Panther\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Panther\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\n\t \"@context\": \"https:\/\/schema.org\",\n\t \"@graph\": [\n\t {\n\t \"@type\": \"Article\",\n\t \"@id\": \"http:\/\/localhost:10003\/how-to-use-llms-for-music-analysis-and-generation\/#article\",\n\t \"isPartOf\": {\n\t \"@id\": \"http:\/\/localhost:10003\/how-to-use-llms-for-music-analysis-and-generation\/\"\n\t },\n\t \"author\": {\n\t \"name\": \"Panther\",\n\t \"@id\": \"http:\/\/localhost:10003\/#\/schema\/person\/b63d816f4964b163e53cbbcffaa0f3d7\"\n\t },\n\t \"headline\": \"How to use LLMs for music analysis and generation\",\n\t \"datePublished\": \"2023-11-04T23:14:03+00:00\",\n\t \"dateModified\": \"2023-11-05T05:48:01+00:00\",\n\t \"mainEntityOfPage\": {\n\t \"@id\": \"http:\/\/localhost:10003\/how-to-use-llms-for-music-analysis-and-generation\/\"\n\t },\n\t \"wordCount\": 942,\n\t \"publisher\": {\n\t \"@id\": \"http:\/\/localhost:10003\/#organization\"\n\t },\n\t \"keywords\": [\n\t \"\\\"artificial intelligence for music\\\"\",\n\t \"\\\"how to use LLMs\\\"\",\n\t \"\\\"LLMs for music generation\\\"\",\n\t \"\\\"machine learning for music\\\"\",\n\t \"\\\"music analysis techniques\\\"\",\n\t \"\\\"music analysis\\\"\",\n\t \"\\\"music creation with LLMs\\\"]\",\n\t \"\\\"music generating models\\\"\",\n\t \"\\\"Music Generation\\\"\",\n\t \"[\\\"LLMs for music analysis\\\"\"\n\t ],\n\t \"inLanguage\": \"en-US\"\n\t },\n\t {\n\t \"@type\": \"WebPage\",\n\t \"@id\": \"http:\/\/localhost:10003\/how-to-use-llms-for-music-analysis-and-generation\/\",\n\t \"url\": \"http:\/\/localhost:10003\/how-to-use-llms-for-music-analysis-and-generation\/\",\n\t \"name\": \"How to use LLMs for music analysis and generation - Pantherax Blogs\",\n\t \"isPartOf\": {\n\t \"@id\": \"http:\/\/localhost:10003\/#website\"\n\t },\n\t \"datePublished\": \"2023-11-04T23:14:03+00:00\",\n\t \"dateModified\": \"2023-11-05T05:48:01+00:00\",\n\t \"breadcrumb\": {\n\t \"@id\": \"http:\/\/localhost:10003\/how-to-use-llms-for-music-analysis-and-generation\/#breadcrumb\"\n\t },\n\t \"inLanguage\": \"en-US\",\n\t \"potentialAction\": [\n\t {\n\t \"@type\": \"ReadAction\",\n\t \"target\": [\n\t \"http:\/\/localhost:10003\/how-to-use-llms-for-music-analysis-and-generation\/\"\n\t ]\n\t }\n\t ]\n\t },\n\t {\n\t \"@type\": \"BreadcrumbList\",\n\t \"@id\": \"http:\/\/localhost:10003\/how-to-use-llms-for-music-analysis-and-generation\/#breadcrumb\",\n\t \"itemListElement\": [\n\t {\n\t \"@type\": \"ListItem\",\n\t \"position\": 1,\n\t \"name\": \"Home\",\n\t \"item\": \"http:\/\/localhost:10003\/\"\n\t },\n\t {\n\t \"@type\": \"ListItem\",\n\t \"position\": 2,\n\t \"name\": \"How to use LLMs for music analysis and generation\"\n\t }\n\t ]\n\t },\n\t {\n\t \"@type\": \"WebSite\",\n\t \"@id\": \"http:\/\/localhost:10003\/#website\",\n\t \"url\": \"http:\/\/localhost:10003\/\",\n\t \"name\": \"Pantherax Blogs\",\n\t \"description\": \"\",\n\t \"publisher\": {\n\t \"@id\": \"http:\/\/localhost:10003\/#organization\"\n\t },\n\t \"potentialAction\": [\n\t {\n\t \"@type\": \"SearchAction\",\n\t \"target\": {\n\t \"@type\": \"EntryPoint\",\n\t \"urlTemplate\": \"http:\/\/localhost:10003\/?s={search_term_string}\"\n\t },\n\t \"query-input\": \"required name=search_term_string\"\n\t }\n\t ],\n\t \"inLanguage\": \"en-US\"\n\t },\n\t {\n\t \"@type\": \"Organization\",\n\t \"@id\": \"http:\/\/localhost:10003\/#organization\",\n\t \"name\": \"Pantherax Blogs\",\n\t \"url\": 
\"http:\/\/localhost:10003\/\",\n\t \"logo\": {\n\t \"@type\": \"ImageObject\",\n\t \"inLanguage\": \"en-US\",\n\t \"@id\": \"http:\/\/localhost:10003\/#\/schema\/logo\/image\/\",\n\t \"url\": \"http:\/\/localhost:10003\/wp-content\/uploads\/2023\/11\/cropped-9e7721cb-2d62-4f72-ab7f-7d1d8db89226.jpeg\",\n\t \"contentUrl\": \"http:\/\/localhost:10003\/wp-content\/uploads\/2023\/11\/cropped-9e7721cb-2d62-4f72-ab7f-7d1d8db89226.jpeg\",\n\t \"width\": 1024,\n\t \"height\": 1024,\n\t \"caption\": \"Pantherax Blogs\"\n\t },\n\t \"image\": {\n\t \"@id\": \"http:\/\/localhost:10003\/#\/schema\/logo\/image\/\"\n\t }\n\t },\n\t {\n\t \"@type\": \"Person\",\n\t \"@id\": \"http:\/\/localhost:10003\/#\/schema\/person\/b63d816f4964b163e53cbbcffaa0f3d7\",\n\t \"name\": \"Panther\",\n\t \"image\": {\n\t \"@type\": \"ImageObject\",\n\t \"inLanguage\": \"en-US\",\n\t \"@id\": \"http:\/\/localhost:10003\/#\/schema\/person\/image\/\",\n\t \"url\": \"http:\/\/2.gravatar.com\/avatar\/b8c0eda5a49f8f31ec32d0a0f9d6f838?s=96&d=mm&r=g\",\n\t \"contentUrl\": \"http:\/\/2.gravatar.com\/avatar\/b8c0eda5a49f8f31ec32d0a0f9d6f838?s=96&d=mm&r=g\",\n\t \"caption\": \"Panther\"\n\t },\n\t \"sameAs\": [\n\t \"http:\/\/localhost:10003\"\n\t ],\n\t \"url\": \"http:\/\/localhost:10003\/author\/pepethefrog\/\"\n\t }\n\t ]\n\t}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"How to use LLMs for music analysis and generation - Pantherax Blogs","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"http:\/\/localhost:10003\/how-to-use-llms-for-music-analysis-and-generation\/","og_locale":"en_US","og_type":"article","og_title":"How to use LLMs for music analysis and generation","og_description":"How to Use LSTMs for Music Analysis and Generation Introduction Long Short-Term Memory (LSTM) is a type of Recurrent Neural Network (RNN) architecture that is particularly effective at modeling sequences in data. It has been successfully used in many applications, including natural language processing and music generation. In this tutorial, Continue Reading","og_url":"http:\/\/localhost:10003\/how-to-use-llms-for-music-analysis-and-generation\/","og_site_name":"Pantherax Blogs","article_published_time":"2023-11-04T23:14:03+00:00","article_modified_time":"2023-11-05T05:48:01+00:00","author":"Panther","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Panther","Est. 
reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"http:\/\/localhost:10003\/how-to-use-llms-for-music-analysis-and-generation\/#article","isPartOf":{"@id":"http:\/\/localhost:10003\/how-to-use-llms-for-music-analysis-and-generation\/"},"author":{"name":"Panther","@id":"http:\/\/localhost:10003\/#\/schema\/person\/b63d816f4964b163e53cbbcffaa0f3d7"},"headline":"How to use LLMs for music analysis and generation","datePublished":"2023-11-04T23:14:03+00:00","dateModified":"2023-11-05T05:48:01+00:00","mainEntityOfPage":{"@id":"http:\/\/localhost:10003\/how-to-use-llms-for-music-analysis-and-generation\/"},"wordCount":942,"publisher":{"@id":"http:\/\/localhost:10003\/#organization"},"keywords":["\"artificial intelligence for music\"","\"how to use LLMs\"","\"LLMs for music generation\"","\"machine learning for music\"","\"music analysis techniques\"","\"music analysis\"","\"music creation with LLMs\"]","\"music generating models\"","\"Music Generation\"","[\"LLMs for music analysis\""],"inLanguage":"en-US"},{"@type":"WebPage","@id":"http:\/\/localhost:10003\/how-to-use-llms-for-music-analysis-and-generation\/","url":"http:\/\/localhost:10003\/how-to-use-llms-for-music-analysis-and-generation\/","name":"How to use LLMs for music analysis and generation - Pantherax Blogs","isPartOf":{"@id":"http:\/\/localhost:10003\/#website"},"datePublished":"2023-11-04T23:14:03+00:00","dateModified":"2023-11-05T05:48:01+00:00","breadcrumb":{"@id":"http:\/\/localhost:10003\/how-to-use-llms-for-music-analysis-and-generation\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["http:\/\/localhost:10003\/how-to-use-llms-for-music-analysis-and-generation\/"]}]},{"@type":"BreadcrumbList","@id":"http:\/\/localhost:10003\/how-to-use-llms-for-music-analysis-and-generation\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"http:\/\/localhost:10003\/"},{"@type":"ListItem","position":2,"name":"How to use LLMs for music analysis and generation"}]},{"@type":"WebSite","@id":"http:\/\/localhost:10003\/#website","url":"http:\/\/localhost:10003\/","name":"Pantherax Blogs","description":"","publisher":{"@id":"http:\/\/localhost:10003\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"http:\/\/localhost:10003\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Organization","@id":"http:\/\/localhost:10003\/#organization","name":"Pantherax Blogs","url":"http:\/\/localhost:10003\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"http:\/\/localhost:10003\/#\/schema\/logo\/image\/","url":"http:\/\/localhost:10003\/wp-content\/uploads\/2023\/11\/cropped-9e7721cb-2d62-4f72-ab7f-7d1d8db89226.jpeg","contentUrl":"http:\/\/localhost:10003\/wp-content\/uploads\/2023\/11\/cropped-9e7721cb-2d62-4f72-ab7f-7d1d8db89226.jpeg","width":1024,"height":1024,"caption":"Pantherax 
Blogs"},"image":{"@id":"http:\/\/localhost:10003\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"http:\/\/localhost:10003\/#\/schema\/person\/b63d816f4964b163e53cbbcffaa0f3d7","name":"Panther","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"http:\/\/localhost:10003\/#\/schema\/person\/image\/","url":"http:\/\/2.gravatar.com\/avatar\/b8c0eda5a49f8f31ec32d0a0f9d6f838?s=96&d=mm&r=g","contentUrl":"http:\/\/2.gravatar.com\/avatar\/b8c0eda5a49f8f31ec32d0a0f9d6f838?s=96&d=mm&r=g","caption":"Panther"},"sameAs":["http:\/\/localhost:10003"],"url":"http:\/\/localhost:10003\/author\/pepethefrog\/"}]}},"jetpack_sharing_enabled":true,"jetpack_featured_media_url":"","_links":{"self":[{"href":"http:\/\/localhost:10003\/wp-json\/wp\/v2\/posts\/4102"}],"collection":[{"href":"http:\/\/localhost:10003\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/localhost:10003\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/localhost:10003\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/localhost:10003\/wp-json\/wp\/v2\/comments?post=4102"}],"version-history":[{"count":1,"href":"http:\/\/localhost:10003\/wp-json\/wp\/v2\/posts\/4102\/revisions"}],"predecessor-version":[{"id":4458,"href":"http:\/\/localhost:10003\/wp-json\/wp\/v2\/posts\/4102\/revisions\/4458"}],"wp:attachment":[{"href":"http:\/\/localhost:10003\/wp-json\/wp\/v2\/media?parent=4102"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/localhost:10003\/wp-json\/wp\/v2\/categories?post=4102"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/localhost:10003\/wp-json\/wp\/v2\/tags?post=4102"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}