Advanced search

Media (0)

Word: - Tags -/interaction

No media matching your criteria is available on the site.

Other articles (76)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP site to find out.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (14252)

  • How can I use the SoX library to process the audio frames decoded by FFmpeg?

    14 June 2023, by bishop

    I am using the SoX audio processing library to process audio frames decoded by FFmpeg:

    


    // Process the audio with SoX
    int sox_count = 9;
    if (StreamType::AUDIO == stream_type_) {
        sox_init();

        sox_signalinfo_t* in_signal = new sox_signalinfo_t();
        in_signal->rate = frame->sample_rate;
        in_signal->channels = frame->ch_layout.nb_channels;
        in_signal->length = frame->nb_samples * in_signal->channels;
        in_signal->precision = av_get_bytes_per_sample(static_cast<AVSampleFormat>(frame->format)) * 8;

        sox_encodinginfo_t* in_encoding = new sox_encodinginfo_t();
        in_encoding->encoding = SOX_ENCODING_SIGN2;
        sox_format_t* tempFormat = sox_open_mem_read(frame->data[0],
                                                     frame->linesize[0],
                                                     in_signal, in_encoding, "raw");

        //sox_write(tempFormat, reinterpret_cast<const sox_sample_t*>(frame->data[0]), frame->nb_samples);

        sox_signalinfo_t* out_signal = in_signal;        // output signal parameters match the input
        //out_signal->rate = new_sample_rate;            // new sample rate, change as needed

        sox_encodinginfo_t* out_encoding = in_encoding;  // output encoding matches the input

        sox_format_t* outputFormat = sox_open_mem_write(frame->data[0],
                                                        frame->linesize[0],
                                                        out_signal, out_encoding, "raw", nullptr);

        // 3. Process the audio data in the in-memory buffer with a SoX effects chain
        sox_effects_chain_t* chain = sox_create_effects_chain(&tempFormat->encoding, &outputFormat->encoding);
        sox_effect_t* effect;
        char* args[10];

        effect = sox_create_effect(sox_find_effect("input"));
        args[0] = (char*)tempFormat;
        assert(sox_effect_options(effect, 1, args) == SOX_SUCCESS);
        assert(sox_add_effect(chain, effect, &tempFormat->signal, &tempFormat->signal) == SOX_SUCCESS);
        free(effect);

        effect = sox_create_effect(sox_find_effect("vol"));
        args[0] = "200dB";
        assert(sox_effect_options(effect, 1, args) == SOX_SUCCESS);
        /* Add the effect to the end of the effects processing chain: */
        assert(sox_add_effect(chain, effect, &tempFormat->signal, &tempFormat->signal) == SOX_SUCCESS);
        free(effect);

        /*
        effect = sox_create_effect(sox_find_effect("pitch"));
        args[0] = "50.0"; // pitch-shift amount, change as needed
        assert(sox_effect_options(effect, 1, args) == SOX_SUCCESS);
        assert(sox_add_effect(chain, effect, &tempFormat->signal, &outputFormat->signal) == SOX_SUCCESS);
        free(effect);
        */

        effect = sox_create_effect(sox_find_effect("output"));
        args[0] = (char*)outputFormat;
        assert(sox_effect_options(effect, 1, args) == SOX_SUCCESS);
        if (sox_add_effect(chain, effect, &tempFormat->signal, &outputFormat->signal) == SOX_SUCCESS) {
            std::cout << "true" << std::endl;
        }
        free(effect);

        // 4. Run the effects chain over the audio data
        sox_flow_effects(chain, nullptr, nullptr);

        fflush((FILE*)outputFormat->fp);
        memcpy(frame->data[1], frame->data[0], frame->linesize[0]);

        // Release resources
        sox_delete_effects_chain(chain);
        sox_close(tempFormat);
        sox_close(outputFormat);
        sox_quit();
    }

    Error: "input: : this handler does not support this data size"

    It occurs when execution reaches the line "sox_flow_effects(chain, nullptr, nullptr);".

    How can I fix this problem?

    I have changed the buffer_size value passed to sox_open_mem_read(frame->data[0], frame->linesize[0], in_signal, in_encoding, "raw"), but that did not help either.
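    Not part of the original post, but one thing worth checking (an assumption on my part): SoX's "raw" reader needs the buffer size to be an exact whole number of samples, while frame->linesize[0] can include alignment padding, and for planar sample formats it covers only one channel plane. A minimal sketch of deriving the byte count from the frame itself, assuming a packed (interleaved) format such as AV_SAMPLE_FMT_S16:

    // Hypothetical sketch: derive the exact payload size from the frame instead of
    // passing frame->linesize[0], which may contain alignment padding.
    // Assumes a packed (interleaved) sample format such as AV_SAMPLE_FMT_S16.
    int bytes_per_sample = av_get_bytes_per_sample(static_cast<AVSampleFormat>(frame->format));
    size_t buffer_size = static_cast<size_t>(frame->nb_samples)
                       * frame->ch_layout.nb_channels
                       * bytes_per_sample;

    sox_format_t* tempFormat = sox_open_mem_read(frame->data[0],
                                                 buffer_size,   // exact number of audio bytes
                                                 in_signal, in_encoding, "raw");

    For planar formats (e.g. AV_SAMPLE_FMT_FLTP), data[0] holds only the first channel, so the samples would first have to be interleaved (for example with libswresample) or each plane processed separately.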

  • Decoded YUV shows green edge when rendered with OpenGL

    2 February 2023, by Alex

    Any idea why the decoded YUV -> RGB (shader conversion) has that extra green edge on the right side?
    Almost any 1080X1920 video seems to have this issue.



    A screen recording of the issue is uploaded here https://imgur.com/a/JtUZq4h


    Once I manually scale up the texture width, I can see it fills the viewport, but it would be nice if I could fix the actual cause. Is it some padding that's part of the YUV colorspace? What else could it be?


    My model spans -1 to 1, filling the entire width.
    The texture coordinates are also in the 0 to 1 range.


    float vertices[] = {
        -1.0f,  1.0f, 0.0f, 0.0f,  // top left
         1.0f,  1.0f, 1.0f, 0.0f,  // top right
        -1.0f, -1.0f, 0.0f, 1.0f,  // bottom left
         1.0f, -1.0f, 1.0f, 1.0f   // bottom right
    };


    Fragment Shader


    #version 330 core

    in vec2 TexCoord;

    out vec4 FragColor;
    precision highp float;
    uniform sampler2D textureY;
    uniform sampler2D textureU;
    uniform sampler2D textureV;
    uniform float alpha;
    uniform vec2 texScale;

    void main()
    {
        float y = texture(textureY, TexCoord / texScale).r;
        float u = texture(textureU, TexCoord / texScale).r - 0.5;
        float v = texture(textureV, TexCoord / texScale).r - 0.5;

        vec3 rgb;

        // yuv - 709
        rgb.r = clamp(y + (1.402 * v), 0, 255);
        rgb.g = clamp(y - (0.2126 * 1.5748 / 0.7152) * u - (0.0722 * 1.8556 / 0.7152) * v, 0, 255);
        rgb.b = clamp(y + (1.8556 * u), 0, 255);

        FragColor = vec4(rgb, 1.0);
    }


    Texture Class


    class VideoTexture {
       public:
        VideoTexture(Decoder *dec) : decoder(dec) {
            glGenTextures(1, &texture1);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glBindTexture(GL_TEXTURE_2D, texture1);
            glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, decoder->frameQueue.first().linesize[0], decoder->frameQueue.first().height, 0, format, GL_UNSIGNED_BYTE, 0);
            glGenerateMipmap(GL_TEXTURE_2D);

            glGenTextures(1, &texture2);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glBindTexture(GL_TEXTURE_2D, texture2);
            glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, decoder->frameQueue.first().linesize[1], decoder->frameQueue.first().height / 2, 0, format, GL_UNSIGNED_BYTE, 0);
            glGenerateMipmap(GL_TEXTURE_2D);

            glGenTextures(1, &texture3);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glBindTexture(GL_TEXTURE_2D, texture3);
            glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, decoder->frameQueue.first().linesize[2], decoder->frameQueue.first().height / 2, 0, format, GL_UNSIGNED_BYTE, 0);
            glGenerateMipmap(GL_TEXTURE_2D);
        }

        void Render(Shader *shader, Gui *gui) {
            if (decoder->frameQueue.isEmpty()) {
                return;
            }

            glActiveTexture(GL_TEXTURE0);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, decoder->frameQueue.first().linesize[0], decoder->frameQueue.first().height, format, GL_UNSIGNED_BYTE, decoder->frameQueue.at(currentFrame).data[0]);
            glBindTexture(GL_TEXTURE_2D, texture1);
            shader->setInt("textureY", 0);

            glActiveTexture(GL_TEXTURE1);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, decoder->frameQueue.first().linesize[1], decoder->frameQueue.first().height / 2, format, GL_UNSIGNED_BYTE, decoder->frameQueue.at(currentFrame).data[1]);
            glBindTexture(GL_TEXTURE_2D, texture2);
            shader->setInt("textureU", 1);

            glActiveTexture(GL_TEXTURE2);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, decoder->frameQueue.first().linesize[2], decoder->frameQueue.first().height / 2, format, GL_UNSIGNED_BYTE, decoder->frameQueue.at(currentFrame).data[2]);
            glBindTexture(GL_TEXTURE_2D, texture3);
            shader->setInt("textureV", 2);
        }

        ~VideoTexture() {
            printf("\nVideo texture destructor");
            glDeleteTextures(1, &texture1);
            glDeleteTextures(1, &texture2);
            glDeleteTextures(1, &texture3);
        }

       private:
        GLuint texture1;
        GLuint texture2;
        GLuint texture3;
        GLint internalFormat = GL_RG8;
        GLint format = GL_RED;
        int currentFrame = 0;
        Decoder *decoder;
    };

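    A possible cause worth checking (my reading, not something stated in the question): the textures above are allocated with a width of linesize[i], and FFmpeg pads linesize up to an alignment boundary beyond the visible width, so the right-hand strip of each texture is padding that turns green after the YUV -> RGB conversion. A minimal sketch of uploading only the visible region, assuming the queued frames expose the decoded width as frame.width:

    // Hypothetical sketch: allocate the textures at the visible frame size and use
    // GL_UNPACK_ROW_LENGTH so glTexSubImage2D skips the per-row alignment padding.
    // frame.width / frame.height are assumed to be the decoded AVFrame dimensions.
    const auto &frame = decoder->frameQueue.first();

    glBindTexture(GL_TEXTURE_2D, texture1);
    glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, frame.width, frame.height,
                 0, format, GL_UNSIGNED_BYTE, nullptr);

    // When uploading pixels, declare the real row length (in pixels) of the source buffer:
    glPixelStorei(GL_UNPACK_ROW_LENGTH, frame.linesize[0]);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frame.width, frame.height,
                    format, GL_UNSIGNED_BYTE, frame.data[0]);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);  // reset to the default

    // The U and V planes would get the same treatment with frame.width / 2,
    // frame.height / 2 and linesize[1] / linesize[2].

    Alternatively, since the shader already has a texScale uniform, one could keep the linesize-wide textures and set texScale.x = linesize[0] / float(width) (and texScale.y = 1.0), so that TexCoord / texScale only samples the visible part of each row.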

  • How do I sync 4 videos in a grid to play the same frame at the same time?

    28 December 2022, by PirateApp

    • 4 of us have recorded ourselves playing a game and want to create a 4 x 4 video grid

    • The game has cutscenes at the beginning, followed by each person having their unique part for the rest of the video

    • I am looking to synchronize the grid so that it starts at the same place in the cutscene for everyone

    • Kindly take a look at what is happening currently: the cutscene is off by a few seconds for everyone

    • Imagine a time offset a, b, c, d such that when I add this offset to each video, the entire video grid will be in sync

    • How do I find this a, b, c, d and, more importantly, how do I add it in filter_complex?

    I used the ffmpeg command below to generate a 4 x 4 video grid and it seems to work


    ffmpeg
        -i nano_prologue.mkv -i macko_nimble_guardian.mkv -i nano_nimble_guardian.mkv -i ghost_nimble_guardian_subtle_arrow_1.mp4
        -filter_complex "
            nullsrc=size=1920x1080 [base];
            [0:v] setpts=PTS-STARTPTS, scale=960x540 [upperleft];
            [1:v] setpts=PTS-STARTPTS, scale=960x540 [upperright];
            [2:v] setpts=PTS-STARTPTS, scale=960x540 [lowerleft];
            [3:v] setpts=PTS-STARTPTS, scale=960x540 [lowerright];
            [base][upperleft] overlay=shortest=1 [tmp1];
            [tmp1][upperright] overlay=shortest=1:x=960 [tmp2];
            [tmp2][lowerleft] overlay=shortest=1:y=540 [tmp3];
            [tmp3][lowerright] overlay=shortest=1:x=960:y=540
        "
        -c:v libx264 output.mkv


    My problem, though, is that since each of us started recording at slightly different times, the cutscenes are out of sync.


    As per the screenshot below, you can see that each video has the same scene starting at a slightly different time.


    Is there a way to find where the same frame starts in all the videos and then sync each video to start from that frame, or from 20 seconds before it?

    [screenshot omitted]

    UPDATE 1


    I have figured out the offset for each video with millisecond precision using the following technique.


    Take a screenshot of the first video at a particular point in the cutscene, save the image as a PNG, and run the command below against each of the remaining 3 videos to find out where that screenshot appears in each of them:

    ffmpeg -i "video2.mp4" -r 1 -loop 1 -i screenshot.png -an -filter_complex "blend=difference:shortest=1,blackframe=90:32" -f null -


    Use the command above to search for the offset in every video for that cutscene


    It gave me this


    VIDEO 3 OFFSET


    [Parsed_blackframe_1 @ 0x600003af00b0] frame:3144 pblack:92 pts:804861 t:52.399805 type:P last_keyframe:3120
    [Parsed_blackframe_1 @ 0x600003af00b0] frame:3145 pblack:96 pts:805117 t:52.416471 type:P last_keyframe:3120


    VIDEO 2 OFFSET


    [Parsed_blackframe_1 @ 0x6000014dc0b0] frame:3629 pblack:91 pts:60483 t:60.483000 type:P last_keyframe:3500


    VIDEO 4 OFFSET


    [Parsed_blackframe_1 @ 0x600002f84160] frame:2885 pblack:93 pts:48083 t:48.083000 type:P last_keyframe:2880
    [Parsed_blackframe_1 @ 0x600002f84160] frame:2886 pblack:96 pts:48100 t:48.100000 type:P last_keyframe:2880


    Now how do I use filter_complex to start each video at either the frame or the timestamp above? I would like to include, say, 10 seconds before the above frame in each video so that it starts from the beginning.


    UPDATE 2


    This command currently gives me a 100% synced video. How do I make it start 15 seconds before the specified frame numbers, and how do I make it use the audio track from video 2 instead?


    ffmpeg
        -i v_nimble_guardian.mkv -i macko_nimble_guardian.mkv -i ghost_nimble_guardian_subtle_arrow_1.mp4 -i nano_nimble_guardian.mkv
        -filter_complex "
            nullsrc=size=1920x1080 [base];
            [0:v] trim=start_pts=49117,setpts=PTS-STARTPTS, scale=960x540 [upperleft];
            [1:v] trim=start_pts=50483,setpts=PTS-STARTPTS, scale=960x540 [upperright];
            [2:v] trim=start_pts=795117,setpts=PTS-STARTPTS, scale=960x540 [lowerleft];
            [3:v] trim=start_pts=38100,setpts=PTS-STARTPTS, scale=960x540 [lowerright];
            [base][upperleft] overlay=shortest=1 [tmp1];
            [tmp1][upperright] overlay=shortest=1:x=960 [tmp2];
            [tmp2][lowerleft] overlay=shortest=1:y=540 [tmp3];
            [tmp3][lowerright] overlay=shortest=1:x=960:y=540
        "
        -c:v libx264 output.mkv

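    Not from the original post, but one untested way to handle both follow-up questions, assuming the inputs correspond to the blackframe timestamps the same way the start_pts values above do: use trim=start with the matched timestamp in seconds minus 15 (which avoids per-stream time-base arithmetic), label the final overlay output, cut the audio of input 1 (video 2) by the same amount with atrim, and map both explicitly. The 44.117 for input 0 is a guess (it assumes the 49117 above is in milliseconds and already sits 10 seconds before the matched frame); the other three values are the blackframe t: values minus 15 seconds.

    ffmpeg
        -i v_nimble_guardian.mkv -i macko_nimble_guardian.mkv -i ghost_nimble_guardian_subtle_arrow_1.mp4 -i nano_nimble_guardian.mkv
        -filter_complex "
            nullsrc=size=1920x1080 [base];
            [0:v] trim=start=44.117,setpts=PTS-STARTPTS, scale=960x540 [upperleft];
            [1:v] trim=start=45.483,setpts=PTS-STARTPTS, scale=960x540 [upperright];
            [2:v] trim=start=37.400,setpts=PTS-STARTPTS, scale=960x540 [lowerleft];
            [3:v] trim=start=33.083,setpts=PTS-STARTPTS, scale=960x540 [lowerright];
            [base][upperleft] overlay=shortest=1 [tmp1];
            [tmp1][upperright] overlay=shortest=1:x=960 [tmp2];
            [tmp2][lowerleft] overlay=shortest=1:y=540 [tmp3];
            [tmp3][lowerright] overlay=shortest=1:x=960:y=540 [v];
            [1:a] atrim=start=45.483,asetpts=PTS-STARTPTS [a]
        "
        -map "[v]" -map "[a]"
        -c:v libx264 -c:a aac output.mkv

    The atrim value matches the trim value of the same input so the chosen audio stays aligned with its video; the trimmed audio is re-encoded (here with AAC) because cutting mid-stream generally prevents a plain stream copy.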