

Other articles (69)

  • Contribute to a better visual interface

13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

  • Retrieving information from the master site when installing an instance

    26 November 2010, by

    Purpose
    On the main site, a shared-hosting ("mutualisation") instance is defined by several things: its data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table, matching an id_auteur in the spip_auteurs table), who will be the only one able to definitively create the instance;
    It can therefore make good sense to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)

  • The farm's regular Cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance in the farm on a regular basis. Coupled with a system Cron on the central site of the farm, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)
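
    A hedged illustration of the system-side coupling (the hostname and entry point below are hypothetical): since SPIP triggers its scheduled tasks on page visits, the central server's crontab could simply request the site every minute.

* * * * * wget -q -O /dev/null http://ferme.example.org/spip.php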

On other sites (9234)

  • How to publish a self-made stream with ffmpeg and C++ to an RTMP server?

    10 January 2024, by rLino

    Have a nice day, people!

    



    I am writing an application for Windows that captures the screen and sends the stream to a Wowza server over RTMP (for broadcasting). My application uses FFmpeg and Qt.
I capture the screen with WinAPI, convert the buffer to YUV444 (because it is the simplest) and encode each frame as described in decoding_encoding.c (from the FFmpeg examples):

    



///////////////////////////
// Encoder initialization
///////////////////////////
avcodec_register_all();
codec = avcodec_find_encoder(AV_CODEC_ID_H264);
c = avcodec_alloc_context3(codec);
c->width = scr_width;
c->height = scr_height;
c->bit_rate = 400000;
int base_num = 1;
int base_den = 1; // for one frame per second
c->time_base = (AVRational){base_num, base_den};
c->gop_size = 10;
c->max_b_frames = 1;
c->pix_fmt = AV_PIX_FMT_YUV444P;
av_opt_set(c->priv_data, "preset", "slow", 0);
// missing in the original snippet: the encoder must be opened before use
if (avcodec_open2(c, codec, NULL) < 0)
    exit(1);

frame = avcodec_alloc_frame();
frame->format = c->pix_fmt;
frame->width  = c->width;
frame->height = c->height;
// also missing: allocate the buffers that RGBtoYUV() writes into
if (av_image_alloc(frame->data, frame->linesize,
                   c->width, c->height, c->pix_fmt, 32) < 0)
    exit(1);

for (int counter = 0; counter < 10; counter++)
{
    ///////////////////////////
    // Capture the screen
    ///////////////////////////
    GetCapScr(shotbuf, scr_width, scr_height); // result: shotbuf is filled with screen data from the HBITMAP

    ///////////////////////////
    // Convert the buffer to YUV444 (standard formula)
    // Hand-written because of problems preparing the HBITMAP buffer for swscale
    ///////////////////////////
    RGBtoYUV(shotbuf, frame->linesize, frame->data, scr_width, scr_height); // result in frame->data

    ///////////////////////////
    // Encode the screenshot
    ///////////////////////////
    av_init_packet(&pkt);
    pkt.data = NULL; // packet data will be allocated by the encoder
    pkt.size = 0;
    frame->pts = counter;
    avcodec_encode_video2(c, &pkt, frame, &got_output);
    if (got_output)
    {
        // I think that sending the packet over RTMP must happen here!
        av_free_packet(&pkt);
    }
}
// Flush the delayed frames
for (got_output = 1; got_output; )
{
    av_init_packet(&pkt); // the packet must be reset between calls
    pkt.data = NULL;
    pkt.size = 0;
    ret = avcodec_encode_video2(c, &pkt, NULL, &got_output);
    if (ret < 0)
    {
        fprintf(stderr, "Error encoding frame\n");
        exit(1);
    }
    if (got_output)
    {
        // I think that sending the packet over RTMP must happen here!
        av_free_packet(&pkt);
    }
}

///////////////////////////
// Deinitialize the encoder
///////////////////////////
avcodec_close(c);
av_free(c);
av_freep(&frame->data[0]);
avcodec_free_frame(&frame);


    



    I need to send the video stream generated by this code to an RTMP server.
In other words, I need a C/C++ analogue of this command:

    



    ffmpeg -re -i "sample.h264" -f flv rtmp://sample.url.com/screen/test_stream


    



    That is useful, but I don't want to save the stream to a file; I want to use the FFmpeg libraries inside my own application to encode the screen capture in real time and send the encoded frames to an RTMP server.
Please give me a small example of how to initialize an AVFormatContext properly and send my encoded video AVPackets to the server.

    



    Thanks.
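
    A minimal sketch of the missing piece, assuming the encoder context c and the packets produced by the loops above, a reasonably recent FFmpeg (one that has AVCodecParameters) and a placeholder URL; error handling is reduced to early returns:

#include <libavformat/avformat.h>

static AVFormatContext *ofmt = NULL;
static AVStream *st = NULL;

static int open_rtmp_output(AVCodecContext *c, const char *url)
{
    avformat_network_init();
    // RTMP carries FLV, so request the "flv" muxer explicitly
    if (avformat_alloc_output_context2(&ofmt, NULL, "flv", url) < 0)
        return -1;
    st = avformat_new_stream(ofmt, NULL);
    if (!st || avcodec_parameters_from_context(st->codecpar, c) < 0)
        return -1;
    st->time_base = c->time_base;
    if (avio_open(&ofmt->pb, url, AVIO_FLAG_WRITE) < 0)
        return -1;
    return avformat_write_header(ofmt, NULL);
}

// where the loops above say "sending the packet over RTMP must happen here":
static int send_packet(AVCodecContext *c, AVPacket *pkt)
{
    pkt->stream_index = st->index;
    av_packet_rescale_ts(pkt, c->time_base, st->time_base);
    return av_interleaved_write_frame(ofmt, pkt); // takes ownership of pkt
}

// after the flush loop, before closing the encoder:
static void close_rtmp_output(void)
{
    av_write_trailer(ofmt);
    avio_closep(&ofmt->pb);
    avformat_free_context(ofmt);
}

    If packets go through send_packet(), the av_free_packet() calls in the loops above should be dropped, since av_interleaved_write_frame() consumes the packet.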

    


  • Android h264 decode non-existing PPS 0 referenced

    8 June 2023, by nmxprime

    In Android JNI, I use ffmpeg with libx264 and the code below to encode and decode raw RGB data. I should use swscale to convert RGB565 to YUV420P as required by H.264, but I am not clear about this conversion. Please help me find where I am wrong, with regard to the log I get.

    



    Code for Encoding

    



    codecinit(), called once (JNI wrapper function):

    



int Java_com_my_package_codecinit(JNIEnv *env, jobject thiz) {
    avcodec_register_all();
    codec = avcodec_find_encoder(AV_CODEC_ID_H264); //AV_CODEC_ID_MPEG1VIDEO
    if (!codec) { // check for NULL before dereferencing codec
        fprintf(stderr, "codec not found\n");
        __android_log_write(ANDROID_LOG_ERROR, "codec", "not found");
    }
    if (codec->id == AV_CODEC_ID_H264)
        __android_log_write(ANDROID_LOG_ERROR, "set", "h264_encoder");

    __android_log_write(ANDROID_LOG_ERROR, "codec", "alloc-context3");
    c = avcodec_alloc_context3(codec);
    if (c == NULL)
        __android_log_write(ANDROID_LOG_ERROR, "avcodec", "context-null");

    picture = av_frame_alloc();
    if (picture == NULL)
        __android_log_write(ANDROID_LOG_ERROR, "picture", "context-null");

    c->bit_rate = 400000;
    c->width = 480;
    c->height = 800;
    c->time_base = (AVRational){1, 25};
    c->gop_size = 10;
    c->max_b_frames = 1;
    c->pix_fmt = AV_PIX_FMT_YUV420P;
    outbuf_size = 768000;

    size = c->width * c->height;

    if (avcodec_open2(c, codec, NULL) < 0)
        __android_log_write(ANDROID_LOG_ERROR, "codec", "could not open");

    ret = av_image_alloc(picture->data, picture->linesize, c->width, c->height,
                         c->pix_fmt, 32);
    if (ret < 0) {
        __android_log_write(ANDROID_LOG_ERROR, "image", "alloc-failed");
        fprintf(stderr, "could not alloc raw picture buffer\n");
    }

    picture->format = c->pix_fmt;
    picture->width  = c->width;
    picture->height = c->height;
    return 0;
}


    



    encodeframe(), called in a while loop:

    



int Java_com_my_package_encodeframe(JNIEnv *env, jobject thiz, jbyteArray buffer) {
    jbyte *temp = (*env)->GetByteArrayElements(env, buffer, 0);
    Output = (char *)temp;
    const uint8_t * const inData[1] = { (const uint8_t *)Output };
    const int inLinesize[1] = { 2 * c->width }; // RGB565: 2 bytes per pixel

    // swscale should be implemented here (see the sketch after this function)

    av_init_packet(&pkt);
    pkt.data = NULL; // packet data will be allocated by the encoder
    pkt.size = 0;

    fflush(stdout);
    picture->data[0] = (uint8_t *)Output; // raw RGB565 is fed in directly: no conversion yet
    ret = avcodec_encode_video2(c, &pkt, picture, &got_output);

    fprintf(stderr, "ret = %d, got-out = %d \n", ret, got_output);
    if (ret < 0) {
        __android_log_write(ANDROID_LOG_ERROR, "error", "encoding");
        if (got_output > 0)
            __android_log_write(ANDROID_LOG_ERROR, "got_output", "is non-zero");
    }

    if (got_output) {
        fprintf(stderr, "encoding frame %3d (size=%5d): (ret=%d)\n", 1, pkt.size, ret);
        fprintf(stderr, "before calling decode");
        decode_inline(&pkt); // function that decodes right after the encode
        fprintf(stderr, "after calling decode");

        av_free_packet(&pkt);
    }

    fprintf(stderr, "y val: %d \n", y);

    (*env)->ReleaseByteArrayElements(env, buffer, temp, 0); // release the jbyte* that was acquired
    return ret;
}
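
    A sketch of the conversion that the "swscale should be implemented here" comment asks for, assuming the c and picture set up in codecinit(); the wrapper name is hypothetical:

#include <libswscale/swscale.h>

static struct SwsContext *rgb2yuv = NULL;

// RGB565 (2 bytes/pixel, as delivered by the Java side) -> YUV420P for x264
static void convert_rgb565_to_yuv420p(AVCodecContext *c, AVFrame *picture,
                                      const uint8_t *rgb565)
{
    const uint8_t *inData[1] = { rgb565 };
    const int inLinesize[1]  = { 2 * c->width };

    if (!rgb2yuv) // created once, reused for every frame
        rgb2yuv = sws_getContext(c->width, c->height, AV_PIX_FMT_RGB565,
                                 c->width, c->height, AV_PIX_FMT_YUV420P,
                                 SWS_BILINEAR, NULL, NULL, NULL);

    // writes into the planes allocated by av_image_alloc() in codecinit()
    sws_scale(rgb2yuv, inData, inLinesize, 0, c->height,
              picture->data, picture->linesize);
}

    With this in place, encodeframe() would call convert_rgb565_to_yuv420p(c, picture, (const uint8_t *)Output) instead of assigning Output to picture->data[0].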


    



    The decode_inline() function:

    



static void decode_inline(AVPacket *avpkt) {
    AVCodec *codec;
    AVCodecContext *c = NULL;
    int frame = 0, got_picture, len = -1, temp = 0;

    AVFrame *rawFrame, *rgbFrame;
    uint8_t inbuf[INBUF_SIZE + FF_INPUT_BUFFER_PADDING_SIZE];
    char buf[1024];
    char rawBuf[768000], rgbBuf[768000];

    struct SwsContext *sws_ctx;

    memset(inbuf + INBUF_SIZE, 0, FF_INPUT_BUFFER_PADDING_SIZE);
    avcodec_register_all();

    // find the decoder first: the original allocated the context from an
    // uninitialized codec pointer
    codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    if (!codec) {
        fprintf(stderr, "codec not found\n");
        __android_log_write(ANDROID_LOG_ERROR, "codec", "not found");
        return;
    }

    c = avcodec_alloc_context3(codec);
    if (c == NULL)
        __android_log_write(ANDROID_LOG_ERROR, "avcodec", "context-null");

    c->pix_fmt = AV_PIX_FMT_YUV420P;
    c->width = 480;
    c->height = 800;

    rawFrame = av_frame_alloc();
    rgbFrame = av_frame_alloc();

    if (avcodec_open2(c, codec, NULL) < 0) {
        fprintf(stderr, "could not open codec\n");
        exit(1);
    }
    // convert decoded YUV420P back to RGB565 for display
    sws_ctx = sws_getContext(c->width, c->height, AV_PIX_FMT_YUV420P,
                             c->width, c->height, AV_PIX_FMT_RGB565,
                             SWS_BILINEAR, NULL, NULL, NULL);

    frame = 0;

    unsigned short *decodedpixels = (unsigned short *)rawBuf; // cast, not address-of
    rawFrame->data[0] = (uint8_t *)rawBuf;
    rgbFrame->data[0] = (uint8_t *)rgbBuf;

fprintf(stderr,"size of avpkt %d \n",avpkt->size);
temp = avpkt->size;
while (temp > 0) {
        len = avcodec_decode_video2(c, rawFrame, &got_picture, avpkt);

        if (len < 0) {
            fprintf(stderr, "Error while decoding frame %d\n", frame);
            exit(1);
            }
        temp -= len;
        avpkt->data += len;

        if (got_picture) {
            printf("saving frame %3d\n", frame);
            fflush(stdout);
        //TODO  
        //memcpy(decodedpixels,rawFrame->data[0],rawFrame->linesize[0]); 
        //  decodedpixels +=rawFrame->linesize[0];

            frame++;
            }

        }

    avcodec_close(c);
    av_free(c);
    //free(rawBuf);
    //free(rgbBuf);
    av_frame_free(&rawFrame);
    av_frame_free(&rgbFrame);
}

    



    The log I get:

    



    For the decode_inline() function:

    




    


    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] Invalid mix of idr and non-idr slices
01-02 14:50:50.160: I/stderr(3407): Error while decoding frame 0


    



    Edit: Changing the GOP value:

    



    If I change c->gop_size = 3;, then as expected the encoder emits one I-frame every three frames. The non-existing PPS 0 referenced message disappears on every third execution, but all the other executions still show it.
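
    A hedged reading of this: decode_inline() builds a brand-new decoder for every packet, and x264 emits SPS/PPS only with IDR frames, so most packets reach a decoder that has never seen PPS 0. Two conventional remedies, sketched against the same legacy API the code above uses (the helper names are hypothetical):

#include <string.h>
#include <libavcodec/avcodec.h>

// 1) On the encoder, before avcodec_open2(): ask for out-of-band SPS/PPS.
static void request_global_headers(AVCodecContext *enc)
{
    enc->flags |= CODEC_FLAG_GLOBAL_HEADER;
}

// 2) On the decoder, before avcodec_open2(): hand it the encoder's headers.
static int copy_headers_to_decoder(const AVCodecContext *enc, AVCodecContext *dec)
{
    dec->extradata = av_mallocz(enc->extradata_size + FF_INPUT_BUFFER_PADDING_SIZE);
    if (!dec->extradata)
        return -1;
    memcpy(dec->extradata, enc->extradata, enc->extradata_size);
    dec->extradata_size = enc->extradata_size;
    return 0;
}

    Keeping a single decoder alive across calls, instead of recreating it inside decode_inline(), would likewise let the in-band headers from the first IDR packet persist.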

    


  • FFmpeg: TS video copied to MP4, missing first 3 frames [closed]

    21 August 2023, by Vasilis Lemonidis

    Running:

    


    ffmpeg -i test.ts -fflags +genpts -c copy -y test.mp4


    


    For this test.ts, which has 30 frames, all readable by OpenCV, I end up with 28 frames, of which 27 are readable by OpenCV. More specifically:

    


    ffprobe -v error -select_streams v:0 -count_packets  -show_entries stream=nb_read_packets -of csv=p=0 tmp.ts 


    


    returns 30.

    


    ffprobe -v error -select_streams v:0 -count_packets     -show_entries stream=nb_read_packets -of csv=p=0 tmp.mp4


    


    returns 28.

    


    Using OpenCV in this manner:

    


    import cv2  # needed for the snippet to run stand-alone

cap = cv2.VideoCapture(tmp_path)
readMat = []
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    readMat.append(frame)


    


    I get 30 frames for the ts file, but 27 for the mp4.
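
    A hedged aside: nb_read_packets counts demuxed packets, not frames the decoder actually produces, so counting decoded frames narrows down where the drop happens:

    ffprobe -v error -select_streams v:0 -count_frames -show_entries stream=nb_read_frames -of csv=p=0 test.mp4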

    


    Could someone explain these discrepancies? I get no error during the conversion from ts to mp4:

    


    ffmpeg version N-111746-gd53acf452f Copyright (c) 2000-2023 the FFmpeg developers
  built with gcc 11.3.0 (GCC)
  configuration: --ld=g++ --bindir=/bin --extra-libs='-lpthread -lm' --pkg-config-flags=--static --enable-static --enable-gpl --enable-libaom --enable-libass --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libsvtav1 --enable-libdav1d --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree --enable-cuda-nvcc --enable-cuvid --enable-nvenc --enable-libnpp 
  libavutil      58. 16.101 / 58. 16.101
  libavcodec     60. 23.100 / 60. 23.100
  libavformat    60. 10.100 / 60. 10.100
  libavdevice    60.  2.101 / 60.  2.101
  libavfilter     9. 10.100 /  9. 10.100
  libswscale      7.  3.100 /  7.  3.100
  libswresample   4. 11.100 /  4. 11.100
  libpostproc    57.  2.100 / 57.  2.100
[mpegts @ 0x4237240] DTS discontinuity in stream 0: packet 5 with DTS 306003, packet 6 with DTS 396001
Input #0, mpegts, from 'tmp.ts':
  Duration: 00:00:21.33, start: 3.400000, bitrate: 15 kb/s
  Program 1 
    Metadata:
      service_name    : Service01
      service_provider: FFmpeg
  Stream #0:0[0x100]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(progressive), 300x300, 1 fps, 3 tbr, 90k tbn
Output #0, mp4, to 'test.mp4':
  Metadata:
    encoder         : Lavf60.10.100
  Stream #0:0: Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 300x300, q=2-31, 1 fps, 3 tbr, 90k tbn
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
[out#0/mp4 @ 0x423e280] video:25kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 4.192123%
frame=   30 fps=0.0 q=-1.0 Lsize=      26kB time=00:00:21.00 bitrate=  10.3kbits/s speed=1e+04x 


    


    Additional information

    


    The video I am processing originates from a continuous stitching operation of still-image ts videos, produced by the update method of this class:

    


    import cv2
import os
import shutil
import subprocess
import logging
import numpy as np
from tempfile import NamedTemporaryFile
from typing import Optional

LOGGER = logging.getLogger(__name__)  # the class logs through this
class VideoUpdater:
    def __init__(
        self, video_path: str, framerate: int, timePerFrame: Optional[int] = None
    ):
        """
        Video updater takes in a video path, and updates it using a supplied frame, based on a given framerate.
        Args:
            video_path: str: Specify the path to the video file
            framerate: int: Set the frame rate of the video
        """
        if not video_path.endswith(".mp4"):
            LOGGER.warning(
                f"File type {os.path.splitext(video_path)[1]} not supported for streaming, switching to ts"
            )
            video_path = os.path.splitext(video_path)[0] + ".mp4"

        self._ps = None
        self.env = {}
        self.ffmpeg = "/usr/bin/ffmpeg "

        self.video_path = video_path
        self.ts_path = video_path.replace(".mp4", ".ts")
        self.tfile = None
        self.framerate = framerate
        self._video = None
        self.last_frame = None
        self.curr_frame = None


    def update(self, frame: np.ndarray):
        if len(frame.shape) == 2:
            frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
        else:
            frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
        self.writeFrame(frame)

    def writeFrame(self, frame: np.ndarray):
        """
        The writeFrame function takes a frame and writes it to the video file.
        Args:
            frame: np.ndarray: Write the frame to a temporary file
        """


        tImLFrame = NamedTemporaryFile(suffix=".png")
        tVidLFrame = NamedTemporaryFile(suffix=".ts")

        cv2.imwrite(tImLFrame.name, frame)
        ps = subprocess.Popen(
            self.ffmpeg
            + rf"-loop 1 -r {self.framerate} -i {tImLFrame.name} -t {self.framerate} -vcodec libx264 -pix_fmt yuv420p -y {tVidLFrame.name}",
            env=self.env,
            shell=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        ps.communicate()
        if os.path.isfile(self.ts_path):
            # this does not work to watch, as timestamps are not updated
            ps = subprocess.Popen(
                self.ffmpeg
                + rf'-i "concat:{self.ts_path}|{tVidLFrame.name}" -c copy -y {self.ts_path.replace(".ts", ".bak.ts")}',
                env=self.env,
                shell=True,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
            )
            ps.communicate()
            shutil.move(self.ts_path.replace(".ts", ".bak.ts"), self.ts_path)

        else:
            shutil.copyfile(tVidLFrame.name, self.ts_path)
        # fixing timestamps; we don't have to wait for this operation
        ps = subprocess.Popen(
            self.ffmpeg
            + rf"-i {self.ts_path} -fflags +genpts -c copy -y {self.video_path}",
            env=self.env,
            shell=True,
            # stdout=subprocess.PIPE,
            # stderr=subprocess.PIPE,
        )
        tImLFrame.close()
        tVidLFrame.close()
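
    For context, a hypothetical driver for this class (sizes and rate chosen to match the 300x300, 1 fps stream in the log above):

    import numpy as np

updater = VideoUpdater("test.mp4", framerate=1)
for i in range(30):
    # each update appends one 300x300 gray frame via the ts concat pipeline
    updater.update(np.full((300, 300), i * 8, dtype=np.uint8))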