
Other articles (111)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your Médiaspip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Changing the graphic theme

    22 February 2011, by

    The graphic theme does not affect the actual layout of elements on the page; it only changes their appearance.
    The placement can indeed appear to change, but the change is purely visual, not at the level of the page's semantic structure.
    Modifying the graphic theme in use
    To change the graphic theme in use, the zen-garden plugin must be enabled on the site.
    Then simply go to the configuration area of the (...)

On other sites (7038)

  • Problems with outputting stream format as RTMP about FFmpeg C-API

    27 November 2023, by dongrixinyu

    I am using FFmpeg's C API to push a video stream (rtmp://....) into an SRS server.

    The input stream is an MP4 file named juren-30s.mp4.
    The output stream is also an MP4 file named juren-30s-5.mp4.
    My piece of code (see further down) works fine for the following pipeline:

    mp4 -> demux -> decode -> rgb images -> encode -> mux -> mp4.

    Problem:
    When I changed the output to an online RTMP URL, e.g. rtmp://ip:port/live/stream_nb_23 (just an example; adjust it to your server and rules), the resulting mp4 -> rtmp(flv) stream was corrupted.

    What I've tried:
    Changing the output format
    I changed the output format parameter to flv when calling avformat_alloc_output_context2, but this didn't help.
    Debugging the output
    When I ran ffprobe rtmp://ip:port/live/xxxxxxx, I got the following errors and did not know why:
    [h264 @ 0x55a925e3ba80] luma_log2_weight_denom 12 is out of range
[h264 @ 0x55a925e3ba80] Missing reference picture, default is 2
[h264 @ 0x55a925e3ba80] concealing 8003 DC, 8003 AC, 8003 MV errors in P frame
[h264 @ 0x55a925e3ba80] QP 4294966938 out of range
[h264 @ 0x55a925e3ba80] decode_slice_header error
[h264 @ 0x55a925e3ba80] no frame!
[h264 @ 0x55a925e3ba80] luma_log2_weight_denom 21 is out of range
[h264 @ 0x55a925e3ba80] luma_log2_weight_denom 10 is out of range
[h264 @ 0x55a925e3ba80] chroma_log2_weight_denom 12 is out of range
[h264 @ 0x55a925e3ba80] Missing reference picture, default is 0
[h264 @ 0x55a925e3ba80] decode_slice_header error
[h264 @ 0x55a925e3ba80] QP 4294967066 out of range
[h264 @ 0x55a925e3ba80] decode_slice_header error
[h264 @ 0x55a925e3ba80] no frame!
[h264 @ 0x55a925e3ba80] QP 341 out of range
[h264 @ 0x55a925e3ba80] decode_slice_header error


    


    I am confused about how MP4 and RTMP differ when using the FFmpeg C API to produce a correct output stream format.

    Besides, I also want to learn how to convert video and audio streams into other formats with the FFmpeg C API, such as flv, ts, and rtsp.
    Code to reproduce the problem (how can I make it output to RTMP without the resulting video being unplayable?):
    #include <stdio.h>
#include "libavformat/avformat.h"
int main()
{
    int ret = 0; int err;

    //Open input file
    char filename[] = "juren-30s.mp4";
    AVFormatContext *fmt_ctx = avformat_alloc_context();
    if (!fmt_ctx) {
        printf("error code %d \n",AVERROR(ENOMEM));
        return ENOMEM;
    }
    if((err = avformat_open_input(&fmt_ctx, filename,NULL,NULL)) < 0){
        printf("can not open file %d \n",err);
        return err;
    }

    //Open the decoder
    AVCodecContext *avctx = avcodec_alloc_context3(NULL);
    ret = avcodec_parameters_to_context(avctx, fmt_ctx->streams[0]->codecpar);
    if (ret < 0){
        printf("error code %d \n",ret);
        return ret;
    }
    AVCodec *codec = avcodec_find_decoder(avctx->codec_id);
    if ((ret = avcodec_open2(avctx, codec, NULL)) < 0) {
        printf("open codec failed %d \n",ret);
        return ret;
    }

    //Open the output file container
    char filename_out[] = "juren-30s-5.mp4";
    AVFormatContext *fmt_ctx_out = NULL;
    err = avformat_alloc_output_context2(&fmt_ctx_out, NULL, NULL, filename_out);
    if (!fmt_ctx_out) {
        printf("error code %d \n",AVERROR(ENOMEM));
        return ENOMEM;
    }
    //Add all the way to the container context
    AVStream *st = avformat_new_stream(fmt_ctx_out, NULL);
    st->time_base = fmt_ctx->streams[0]->time_base;

    AVCodecContext *enc_ctx = NULL;
    
    AVPacket *pkt = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();
    AVPacket *pkt_out = av_packet_alloc();

    int frame_num = 0; int read_end = 0;
    
    for(;;){
        if( 1 == read_end ){ break;}

        ret = av_read_frame(fmt_ctx, pkt);
        //Skip and do not process audio packets
        if( 1 == pkt->stream_index ){
            av_packet_unref(pkt);
            continue;
        }

        if ( AVERROR_EOF == ret) {
            //After reading the file, the data and size of pkt should be null at this time
            avcodec_send_packet(avctx, NULL);
        }else {
            if( 0 != ret){
                printf("read error code %d \n",ret);
                return ret;
            }else{
                retry:
                if (avcodec_send_packet(avctx, pkt) == AVERROR(EAGAIN)) {
                    printf("Receive_frame and send_packet both returned EAGAIN, which is an API violation.\n");
                    //Here you can consider sleeping for 0.1 seconds and returning EAGAIN. This is usually because there is a bug in ffmpeg's internal API.
                    goto retry;
                }
                //Release the encoded data in pkt
                av_packet_unref(pkt);
            }

        }

        //The loop keeps reading data from the decoder until there is no more data to read.
        for(;;){
            //Read AVFrame
            ret = avcodec_receive_frame(avctx, frame);
            /* No need to release the frame here:
             * avcodec_receive_frame() already calls av_frame_unref() on the
             * frame before filling it, so a manual unref is unnecessary. */

            if( AVERROR(EAGAIN) == ret ){
                //Prompt EAGAIN means the decoder needs more AVPackets
                //Jump out of the first layer of for and let the decoder get more AVPackets
                break;
            }else if( AVERROR_EOF == ret ){
                /* The prompt AVERROR_EOF means that an AVPacket with both data and size NULL has been sent to the decoder before.
                 * Sending NULL AVPacket prompts the decoder to flush out all cached frames.
                 * Usually a NULL AVPacket is sent only after reading the input file, or when another video stream needs to be decoded with an existing decoder.
                 *
                 * */

                /* Send null AVFrame to the encoder and let the encoder flush out the remaining data.
                 * */
                ret = avcodec_send_frame(enc_ctx, NULL);
                for(;;){
                    ret = avcodec_receive_packet(enc_ctx, pkt_out);
                    //EAGAIN should be impossible right after flushing; if it happens, exit.
                    if (ret == AVERROR(EAGAIN)){
                        printf("avcodec_receive_packet error code %d \n",ret);
                        return ret;
                    }
                    
                    if ( AVERROR_EOF == ret ){ break; }
                    
                    //Encode the AVPacket, print some information first, and then write it to the file.
                    printf("pkt_out size : %d \n",pkt_out->size);
                    //Set the stream_index of AVPacket so that you know which stream it is.
                    pkt_out->stream_index = st->index;
                    //Convert the time base of AVPacket to the time base of the output stream.
                    pkt_out->pts = av_rescale_q_rnd(pkt_out->pts, fmt_ctx->streams[0]->time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
                    pkt_out->dts = av_rescale_q_rnd(pkt_out->dts, fmt_ctx->streams[0]->time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
                    pkt_out->duration = av_rescale_q_rnd(pkt_out->duration, fmt_ctx->streams[0]->time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);


                    ret = av_interleaved_write_frame(fmt_ctx_out, pkt_out);
                    if (ret < 0) {
                        printf("av_interleaved_write_frame failed %d \n",ret);
                        return ret;
                    }
                    av_packet_unref(pkt_out);
                }
                av_write_trailer(fmt_ctx_out);
                //Jump out of the second layer of for, the file has been decoded.
                read_end = 1;
                break;
            }else if( ret >= 0 ){
                //Only when a frame is decoded can the encoder be initialized.
                if( NULL == enc_ctx ){
                    //Open the encoder and set encoding information.
                    AVCodec *encode = avcodec_find_encoder(AV_CODEC_ID_H264);
                    enc_ctx = avcodec_alloc_context3(encode);
                    enc_ctx->codec_type = AVMEDIA_TYPE_VIDEO;
                    enc_ctx->bit_rate = 400000;
                    enc_ctx->framerate = avctx->framerate;
                    enc_ctx->gop_size = 30;
                    enc_ctx->max_b_frames = 10;
                    enc_ctx->profile = FF_PROFILE_H264_MAIN;
                   
                    /*
                     * Most of this information is also available in the container, so the
                     * encoder could be opened from the container parameters right away.
                     * I take the parameters from the decoded AVFrame instead, because the
                     * frame may have passed through a filter that changed them (this
                     * article does not use filters, though).
                     */
                     
                    //The time base of the encoder should be the time base of AVFrame, because AVFrame is the input. The time base of AVFrame is the time base of the stream.
                    enc_ctx->time_base = fmt_ctx->streams[0]->time_base;
                    enc_ctx->width = fmt_ctx->streams[0]->codecpar->width;
                    enc_ctx->height = fmt_ctx->streams[0]->codecpar->height;
                    enc_ctx->sample_aspect_ratio = st->sample_aspect_ratio = frame->sample_aspect_ratio;
                    enc_ctx->pix_fmt = frame->format;
                    enc_ctx->color_range            = frame->color_range;
                    enc_ctx->color_primaries        = frame->color_primaries;
                    enc_ctx->color_trc              = frame->color_trc;
                    enc_ctx->colorspace             = frame->colorspace;
                    enc_ctx->chroma_sample_location = frame->chroma_location;

                    /* Note that field_order differs between videos; it is hard-coded here
                     * because the video in this article is AV_FIELD_PROGRESSIVE.
                     * A production environment needs to handle other videos accordingly.
                     */
                    enc_ctx->field_order = AV_FIELD_PROGRESSIVE;

                    /* Now we need to copy the encoder parameters to the stream. When decoding, assign parameters from the stream to the decoder.
                     * Now let’s do it in reverse.
                     * */
                    ret = avcodec_parameters_from_context(st->codecpar,enc_ctx);
                    if (ret < 0){
                        printf("error code %d \n",ret);
                        return ret;
                    }
                    if ((ret = avcodec_open2(enc_ctx, encode, NULL)) < 0) {
                        printf("open codec failed %d \n",ret);
                        return ret;
                    }

                    //Formally open the output file
                    if ((ret = avio_open2(&fmt_ctx_out->pb, filename_out, AVIO_FLAG_WRITE,&fmt_ctx_out->interrupt_callback,NULL)) < 0) {
                        printf("avio_open2 fail %d \n",ret);
                        return ret;
                    }

                    //Write the file header first.
                    ret = avformat_write_header(fmt_ctx_out,NULL);
                    if (ret < 0) {
                        printf("avformat_write_header fail %d \n",ret);
                        return ret;
                    }

                }

                //Send AVFrame to the encoder, and then continuously read AVPacket
                ret = avcodec_send_frame(enc_ctx, frame);
                if (ret < 0) {
                    printf("avcodec_send_frame fail %d \n",ret);
                    return ret;
                }
                for(;;){
                    ret = avcodec_receive_packet(enc_ctx, pkt_out);
                    if (ret == AVERROR(EAGAIN)){ break; }
                    
                    if (ret < 0){
                    printf("avcodec_receive_packet fail %d \n",ret);
                    return ret;
                    }
                    
                    //Encode the AVPacket, print some information first, and then write it to the file.
                    printf("pkt_out size : %d \n",pkt_out->size);

                    //Set the stream_index of AVPacket so that you know which stream it is.
                    pkt_out->stream_index = st->index;
                    
                    //Convert the time base of AVPacket to the time base of the output stream.
                    pkt_out->pts = av_rescale_q_rnd(pkt_out->pts, fmt_ctx->streams[0]->time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
                    pkt_out->dts = av_rescale_q_rnd(pkt_out->dts, fmt_ctx->streams[0]->time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
                    pkt_out->duration = av_rescale_q_rnd(pkt_out->duration, fmt_ctx->streams[0]->time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);

                    ret = av_interleaved_write_frame(fmt_ctx_out, pkt_out);
                    if (ret < 0) {
                        printf("av_interleaved_write_frame failed %d \n",ret);
                        return ret;
                    }
                    av_packet_unref(pkt_out);
                }

            }
            else{ printf("other fail \n"); return ret;}
        }
    }
    
    av_frame_free(&frame); av_packet_free(&pkt); av_packet_free(&pkt_out);
    
    //Free the encoder and decoder contexts (avcodec_close() is deprecated).
    avcodec_free_context(&avctx); avcodec_free_context(&enc_ctx);

    //Release container memory.
    avformat_free_context(fmt_ctx);

    //avio_closep() must be called, otherwise buffered data may not be flushed and the file can end up 0 KB
    avio_closep(&fmt_ctx_out->pb);
    avformat_free_context(fmt_ctx_out);
    printf("done \n");

    return 0;
}
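    For the RTMP case specifically, here is a minimal, untested sketch of the two changes that usually matter in the code above (enc_ctx, fmt_ctx_out and the URL follow the question; this is an assumption about the fix, not a confirmed answer):

```c
#include "libavformat/avformat.h"

/* Sketch only: force the FLV muxer, since RTMP carries FLV rather than MP4.
 * Guessing the format from an rtmp:// URL alone may not pick it. */
const char *out_url = "rtmp://ip:port/live/stream_nb_23";
AVFormatContext *fmt_ctx_out = NULL;
avformat_alloc_output_context2(&fmt_ctx_out, NULL, "flv", out_url);

/* Before avcodec_open2(): muxers flagged AVFMT_GLOBALHEADER (FLV, MP4)
 * expect the H.264 parameter sets in the encoder's global extradata;
 * without this flag players typically see a corrupt stream. */
if (fmt_ctx_out->oformat->flags & AVFMT_GLOBALHEADER)
    enc_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
```

    avio_open2() is still needed for the rtmp:// URL, since the FLV muxer does not set AVFMT_NOFILE.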


    


  • openSUSE does not boot after I install gnutls

    5 September 2021, by Harry Boy

    I am trying to build ffmpeg on openSUSE with the following configure flags:

    ffmpeg-3.4.8 # ./configure --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --incdir=/usr/include/ffmpeg --extra-cflags='-fmessage-length=0 -grecord-gcc-switches -O2 -Wall -D_FORTIFY_SOURCE=2 -fstack-protector-strong -funwind-tables -fasynchronous-unwind-tables -fstack-clash-protection -g' --optflags='-fmessage-length=0 -grecord-gcc-switches -O2 -Wall -D_FORTIFY_SOURCE=2 -fstack-protector-strong -funwind-tables -fasynchronous-unwind-tables -fstack-clash-protection -g' --disable-htmlpages --enable-pic --disable-stripping --enable-shared --disable-static --enable-gpl --disable-openssl --enable-avresample --enable-libcdio --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libcelt --enable-libcdio --enable-libdc1394 --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libzimg --enable-libzvbi --enable-vaapi --enable-vdpau --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-version3 --enable-libx264 --enable-libx265 --enable-libxvid


    


    I get the following error:

    ERROR: gnutls not found using pkg-config

If you think configure made a mistake, make sure you are using the latest
version from Git.  If the latest version fails, report the problem to the
ffmpeg-user@ffmpeg.org mailing list or IRC #ffmpeg on irc.freenode.net.
Include the log file "ffbuild/config.log" produced by configure as this will help
solve the problem.
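
    For context, this error means configure checks for gnutls through pkg-config, which needs the gnutls.pc file shipped by the development package, not just the runtime library. A sketch of the usual fix, assuming the openSUSE devel package is named libgnutls-devel:

```shell
# Assumed package names: the devel package ships headers plus gnutls.pc,
# which is what configure's pkg-config check looks for.
sudo zypper install libgnutls-devel pkg-config
# Verify that configure will now find it:
pkg-config --exists gnutls && echo "gnutls found"
```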


    


    I then installed gnutls from this link:
https://software.opensuse.org/package/gnutls

    


    zypper addrepo https://download.opensuse.org/repositories/openSUSE:Leap:15.2:Update/standard/openSUSE:Leap:15.2:Update.repo

zypper refresh
zypper install gnutls
Loading repository data...
Reading installed packages...
Resolving package dependencies...

Problem: nothing provides 'this-is-only-for-build-envs' needed by the to be installed libunbound-devel-mini-1.6.8-lp152.9.3.1.x86_64
Solution 1: do not install gnutls-3.6.7-lp152.9.12.1.x86_64
Solution 2: break libunbound-devel-mini-1.6.8-lp152.9.3.1.x86_64 by ignoring some of its dependencies

Choose from above solutions by number or cancel [1/2/c/d/?] (c): 2
Resolving dependencies...
Resolving package dependencies...

The following 4 NEW packages are going to be installed:
 gnutls libgnutls-dane0 libopts25 libunbound-devel-mini

The following package requires a system reboot:
gnutls

4 new packages to install.
Overall download size: 1.1 MiB. Already cached: 0 B. After the operation, additional 4.0 MiB will be used.

Note: System reboot required.
Continue? [y/n/v/...? shows all options] (y): y
Retrieving package libopts25-5.18.12-lp152.4.3.1.x86_64                                                                                     
(1/4),  65.6 KiB (134.5 KiB unpacked)
Retrieving: libopts25-5.18.12-lp152.4.3.1.x86_64.rpm ......................................................................................................................[done]
Retrieving package libunbound-devel-mini-1.6.8-lp152.9.3.1.x86_64                                                                           (2/4), 331.5 KiB (723.4 KiB unpacked)
Retrieving: libunbound-devel-mini-1.6.8-lp152.9.3.1.x86_64.rpm ............................................................................................................[done]
Retrieving package libgnutls-dane0-3.6.7-lp152.9.12.1.x86_64                                                                                (3/4),  91.7 KiB ( 34.6 KiB unpacked)
Retrieving: libgnutls-dane0-3.6.7-lp152.9.12.1.x86_64.rpm .................................................................................................................[done]
Retrieving package gnutls-3.6.7-lp152.9.12.1.x86_64                                                                                         (4/4), 675.4 KiB (  3.1 MiB unpacked)
Retrieving: gnutls-3.6.7-lp152.9.12.1.x86_64.rpm ..............................................................................................................[done (3.6 MiB/s)]

Checking for file conflicts: ..............................................................................................................................................[done]
(1/4) Installing: libopts25-5.18.12-lp152.4.3.1.x86_64 ....................................................................................................................[done]
(2/4) Installing: libunbound-devel-mini-1.6.8-lp152.9.3.1.x86_64 ..........................................................................................................[done]
(3/4) Installing: libgnutls-dane0-3.6.7-lp152.9.12.1.x86_64 ...............................................................................................................[done]
(4/4) Installing: gnutls-3.6.7-lp152.9.12.1.x86_64 ........................................................................................................................[done]


    


    Note the system reboot hint.
When I reboot, the system just boots to a black screen and does not display the logon prompt.
Any ideas why this happens or how I can debug it?
  • Why is my .mp4 file created using cv2.VideoWriter not syncing up with the audio when I combine the video and audio using ffmpeg [closed]

    27 December 2024, by joeS125

    The aim of the script is to take text from a text file and put it onto a stock video, with an AI voice reading the text, similar to those Reddit stories on social media with Minecraft parkour in the background.

    


    import cv2
import time
from ffpyplayer.player import MediaPlayer
from Transcription import newTranscribeAudio
from pydub import AudioSegment

#get a gpt text generation to create a story based on a prompt, for example sci-fi story and spread it over 3-4 parts
#get stock footage, like minecraft parkour etc
#write text of script on the footage
#create video for each part
#have ai voiceover to read the transcript
cap = cv2.VideoCapture("Stock_Videos\Minecraft_Parkour.mp4")
transcription = newTranscribeAudio("final_us.wav")
player = MediaPlayer("final_us.mp3")
audio = AudioSegment.from_file("final_us.mp3")
story = open("Story.txt", "r").read()
story_split = story.split("||")
fps = cap.get(cv2.CAP_PROP_FPS)
frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
video_duration = frame_count / fps  # Duration of one loop of the video
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
audio_duration = len(audio) / 1000  # Duration in seconds
video_writer = cv2.VideoWriter(f"CompletedVideo.mp4", fourcc, fps, (1080, 1920))

choice = 0  # part of the story choice
part_split = story_split[choice].split(" ")
with open("Segment.txt", "w") as file:
    file.write(story_split[choice])
start_time = time.time()
length = len(part_split) - 1
next_text = []
for j in range(0, length):
    temp = part_split[j].replace("\n", "")
    next_text.append([temp])
index = 0
word_index = 0
video_text = ""  # guards against use before the first word's start time
frame_size_x = 1080
frame_size_y = 1920
audio_duration = len(audio) / 1000  # Duration in seconds
start_time = time.time()
wait_time = 1 / fps
while (time.time() - start_time) < audio_duration:
    cap.set(cv2.CAP_PROP_POS_FRAMES, 0)  # Restart video
    elapsed_time = time.time() - start_time
    print(video_writer)
    if index >= len(transcription):
        break
    while cap.isOpened():
        # Capture frames in the video 
        ret, frame = cap.read()
        if not ret:
            break
        audio_frame, val = player.get_frame()
        if val == 'eof':  # End of file
            print("Audio playback finished.")
            break
        if index >= len(transcription):
            break
        
        if frame_size_x == -1:
            frame_size_x = frame.shape[1]
            frame_size_y = frame.shape[0]

        elapsed_time = time.time() - start_time

        # describe the type of font 
        # to be used. 
        font = cv2.FONT_HERSHEY_SIMPLEX 
        trans = transcription[index]["words"]
        end_time = trans[word_index]["end"]
        if trans[word_index]["start"] < elapsed_time < trans[word_index]["end"]:
            video_text = trans[word_index]["text"]
        elif elapsed_time >= trans[word_index]["end"]:
            #index += 1
            word_index += 1
        if (word_index >= len(trans)):
            index += 1
            word_index = 0
        # get boundary of this text
        textsize = cv2.getTextSize(video_text, font, 3, 6)[0]
        # get coords based on boundary
        textX = int((frame.shape[1] - textsize[0]) / 2)
        textY = int((frame.shape[0] + textsize[1]) / 2)
        
        cv2.putText(frame,  
                    video_text,  
                    (textX, textY),  
                    font, 3,  
                    (0, 255, 255),  
                    6,  
                    cv2.LINE_4)
        
        # Resize to the 1080x1920 output size expected by the VideoWriter
        new_size = (1080, 1920)

        # Resize the frame
        resized_frame = cv2.resize(frame, new_size)
        video_writer.write(resized_frame)
        cv2.imshow('video', resized_frame)
        # waitKey expects an integer number of milliseconds
        if cv2.waitKey(max(1, int(wait_time * 1000))) & 0xFF == ord('q'):
            break
cv2.destroyAllWindows()
video_writer.release()
cap.release()



    


    When I run this script, the audio matches the text in the video perfectly, and the script runs for the correct amount of time to match the audio (2 min 44 sec). However, the saved CompletedVideo.mp4 only lasts 1 min 10 sec, and I am unsure why the video has sped up. The fps is 60. If you need any more information, please let me know; thanks in advance.

    I have tried changing the fps and the wait_time after writing each frame. I am expecting CompletedVideo.mp4 to be 2 min 44 sec long, not 1 min 10 sec.
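
    A way to sanity-check the speed-up: cv2.VideoWriter stamps the file with the declared fps no matter how many frames were actually written, so playback time is frames_written / declared_fps. A small helper (hypothetical, not part of the script above) makes the arithmetic concrete: 1 min 10 s of playback declared at 60 fps is about 4200 frames, and 4200 frames spread over the 164 s of audio means the loop only achieved roughly 25.6 fps of real throughput:

```python
def effective_fps(frames_written, wall_seconds):
    """FPS to declare in cv2.VideoWriter so the file plays back in real time."""
    return frames_written / wall_seconds

# 70 s of playback at a declared 60 fps ~= 4200 frames on disk;
# spread over the 164 s of audio, the real capture rate was only:
print(round(effective_fps(70 * 60, 164), 1))  # ~25.6
```

    Declaring the writer's fps as frames_written / audio_duration, or writing every source frame without gating the loop on wall-clock time, would make the durations line up.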