
Other articles (95)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
    Once it is enabled, a preconfiguration is automatically set up by MediaSPIP init so that the new feature is immediately operational. There is therefore no need to go through a configuration step for this.

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several plugins, in addition to those used by the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a shared instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (6957)

  • parser not found for codec wmav2

    5 December 2011, by HashCoder

    I am getting a warning when I run the command below.

    Warning: [asf @ 01C787A0] parser not found for codec wmav2, packets or times may be invalid.

    I am using the latest ffmpeg.exe. Did I miss any parameters? Any suggestions, please.

    ffmpeg -i Assets\Logitech_webcam_on_PC.wmv -sameq -f swf -y -an -s 640x360 MySlide.swf
    ffmpeg version N-35295-gb55dd10, Copyright (c) 2000-2011 the FFmpeg developers
     built on Nov 30 2011 00:52:52 with gcc 4.6.2
     configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
     libavutil    51. 29. 1 / 51. 29. 1
     libavcodec   53. 39. 1 / 53. 39. 1
     libavformat  53. 22. 0 / 53. 22. 0
     libavdevice  53.  4. 0 / 53.  4. 0
     libavfilter   2. 50. 0 /  2. 50. 0
     libswscale    2.  1. 0 /  2.  1. 0
     libpostproc  51.  2. 0 / 51.  2. 0
    [asf @ 01C787A0] parser not found for codec wmav2, packets or times may be invalid.

    Seems stream 1 codec frame rate differs from container frame rate: 1000.00 (1000/1) -> 0.08 (1/12)
    Input #0, asf, from 'Assets\Logitech_webcam_on_PC.wmv':
     Metadata:
       WMFSDKVersion   : 11.0.5721.5265
       WMFSDKNeeded    : 0.0.0.0000
       IsVBR           : 1
       VBR Peak        : 50500.0000
       Buffer Average  : 66550.0000
     Duration: 00:00:36.22, start: 0.000000, bitrate: 497 kb/s
       Stream #0:0(eng): Audio: wmav2 (a[1][0][0] / 0x0161), 32000 Hz, 1 channels, s16, 20 kb/s
       Stream #0:1(eng): Video: wmv2 (WMV2 / 0x32564D57), yuv420p, 320x180, 422 kb/s, 0.08 tbr, 1k tbn, 1k tbc
    [buffer @ 02AA9760] w:320 h:180 pixfmt:yuv420p tb:1/1000000 sar:0/1 sws_param:
    [scale @ 02AA9A80] w:320 h:180 fmt:yuv420p -> w:640 h:360 fmt:yuv420p flags:0x4
    Output #0, swf, to 'MySlide.swf':
     Metadata:
       WMFSDKVersion   : 11.0.5721.5265
       WMFSDKNeeded    : 0.0.0.0000
       IsVBR           : 1
       VBR Peak        : 50500.0000
       Buffer Average  : 66550.0000
       encoder         : Lavf53.22.0
       Stream #0:0(eng): Video: flv1, yuv420p, 640x360, q=2-31, 200 kb/s, 90k tbn, 0.08 tbc
    Stream mapping:
     Stream #0:1 -> #0:0 (wmv2 -> flv)
    Press [q] to stop, [?] for help
    frame=    4 fps=  0 q=0.0 size=      97kB time=00:00:48.00 bitrate=  16.6kbits/s
    frame=    5 fps=  0 q=0.0 Lsize=     111kB time=00:01:00.00 bitrate=  15.2kbits/s dup=0 drop=599
    video:111kB audio:0kB global headers:0kB muxing overhead 0.128646%

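    The warning is harmless for this particular command: -an drops the audio stream, so the wmav2 packets the demuxer cannot parse never reach the output. A separate pitfall is that -sameq was removed from FFmpeg after builds of this era; a rough modern equivalent (the quantizer value below is an assumption chosen for illustration, not taken from the question) is a fixed -q:v:

```shell
# Sketch for a current FFmpeg build: -sameq no longer exists, so a fixed
# video quantizer (-q:v 2, high quality; value is illustrative) stands in.
# -an still drops the audio, which makes the wmav2 parser warning moot.
ffmpeg -i Assets/Logitech_webcam_on_PC.wmv -an -s 640x360 -q:v 2 -f swf -y MySlide.swf
```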
  • Rawvideo to mp4 container

    21 February 2020, by Expressingx

    How can I stream the rawvideo codec to an mp4 container? The error is: Could not find tag for codec rawvideo in stream #0, codec not currently supported in container. The URL is, for example, video=Logitech HD Webcam C270, the format is dshow, and the filename is, say, out.mp4.

    AVFormatContext* pInputFmtCtx = avformat_alloc_context();
    AVInputFormat* inputFormat = av_find_input_format(Format);

    avformat_open_input(&pInputFmtCtx, url, inputFormat, null);
    if (avformat_find_stream_info(pInputFmtCtx, null) != -1)
      ... find stream index

    AVCodec* videoDecoder = avcodec_find_decoder(pInputFmtCtx->streams[_vidStreamIndex]->codecpar->codec_id);

    AVCodecContext* videcCodecCtx = avcodec_alloc_context3(videoDecoder);
    avcodec_parameters_to_context(videcCodecCtx, videoCodecParams);

    avcodec_open2(videcCodecCtx, videoDecoder, null);

    // and the output context
    AVFormatContext* pOutputFmtCtx = null;
    avformat_alloc_output_context2(&pOutputFmtCtx, null, null, fileName);

    // iterate over streams of input context and when we find it
    AVStream* in_stream = pInputFmtCtx->streams[i];
    AVCodecParameters* in_codecpar = in_stream->codecpar;

    AVStream* out_stream = avformat_new_stream(pOutputFmtCtx, null);

    // init h264 encoder
    AVCodec* videoEncoder = ffmpeg.avcodec_find_encoder(AVCodecID.AV_CODEC_ID_H264);
    pVideoEncoderCodecContext = ffmpeg.avcodec_alloc_context3(videoEncoder);

    pVideoEncoderCodecContext->time_base = videcCodecCtx->time_base;
    pVideoEncoderCodecContext->framerate = videcCodecCtx->framerate;
    pVideoEncoderCodecContext->width = videcCodecCtx->width;
    pVideoEncoderCodecContext->height = videcCodecCtx->height;
    pVideoEncoderCodecContext->bit_rate = videcCodecCtx->bit_rate;
    pVideoEncoderCodecContext->gop_size = videcCodecCtx->gop_size;
    pVideoEncoderCodecContext->pix_fmt = AVPixelFormat.AV_PIX_FMT_YUV420P;
    pVideoEncoderCodecContext->flags |= ffmpeg.AV_CODEC_FLAG_GLOBAL_HEADER;

    // copy parameters to outstream codec
    avcodec_parameters_copy(out_stream->codecpar, in_codecpar);

    ....

    // after that
    avio_open(&pOutputFmtCtx->pb, fileName, AVIO_FLAG_WRITE);
    avformat_write_header(pOutputFmtCtx, &opts);

    // and reading
    while (av_read_frame(pInputFormatContext, pkt) >= 0)

    // decode
    avcodec_send_packet(videcCodecCtx, pkt);
    //receive the raw frame from the decoder
    avcodec_receive_frame(videcCodecCtx, frame);

    // now encode if its video packet
    int ret = avcodec_send_frame(pVideoEncoderCodecContext, frame);
    if (ret < 0)
    {
       continue;
    }

    while (ret >= 0)
    {
       ret = avcodec_receive_packet(pVideoEncoderCodecContext, packet);

       if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
       {
              return;
       }

       av_packet_rescale_ts(packet, pVideoEncoderCodecContext->time_base, pOutputFmtCtx->streams[packet->stream_index]->time_base);
       av_interleaved_write_frame(pOutputFmtCtx, pkt);

       av_packet_unref(packet);
    }

    This works fine if the camera streams H264, but if the camera streams rawvideo, nothing is written to the file and the error above is thrown.

    EDIT: As suggested, I am now trying to encode it, but avcodec_send_frame() returns -22 and nothing is saved to the file. What am I missing? The code above has been edited accordingly.
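A sketch of the fixes that usually clear -22 (AVERROR(EINVAL)) from avcodec_send_frame() in this pipeline, written against the plain FFmpeg C API; the function and variable names below are illustrative, not the asker's exact code. The three suspects: the encoder context is never opened with avcodec_open2(); the output stream's parameters are copied from the raw input instead of from the encoder; and a decoded webcam frame (often yuyv422) does not match the encoder's yuv420p and needs conversion first.

```cpp
// Sketch under the assumptions stated above; error handling elided for brevity.
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
}

// Fix 1 and 2: open the encoder before sending frames, and describe the
// *encoded* stream on the output, not a copy of the raw input's codecpar.
void open_encoder_and_wire_stream(AVCodecContext *enc, const AVCodec *codec,
                                  AVStream *out_stream) {
    avcodec_open2(enc, codec, nullptr);
    avcodec_parameters_from_context(out_stream->codecpar, enc);
}

// Fix 3: convert the decoded raw frame to the encoder's pixel format
// (AV_PIX_FMT_YUV420P here) before avcodec_send_frame().
AVFrame *convert_to_yuv420p(AVFrame *src, int w, int h) {
    SwsContext *sws = sws_getContext(w, h, (AVPixelFormat)src->format,
                                     w, h, AV_PIX_FMT_YUV420P,
                                     SWS_BILINEAR, nullptr, nullptr, nullptr);
    AVFrame *dst = av_frame_alloc();
    dst->format = AV_PIX_FMT_YUV420P;
    dst->width  = w;
    dst->height = h;
    av_frame_get_buffer(dst, 0);
    sws_scale(sws, src->data, src->linesize, 0, h, dst->data, dst->linesize);
    dst->pts = src->pts;  // preserve timestamps for av_packet_rescale_ts()
    sws_freeContext(sws);
    return dst;
}
```

Note also that av_interleaved_write_frame() in the question is passed pkt (the packet read from the input) where packet (the freshly encoded one) appears intended.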

  • Opencv RTSP streaming with ffmpeg and gstreamer

    14 September 2015, by ironman

    I am using an IP camera which has a mainstream (resolution 1920x1080) and a substream (resolution 720x576). My aim is to detect motion using the substream; if motion occurs, I take a snapshot from the mainstream and do some processing on that image. Here is my code:

       VideoCapture cap;   //video capture device captures the pal stream
       VideoCapture cap2;  //video capture device captures the main stream
       Mat rgb_im, gray_im, frame_big;

       // cap.set(CV_CAP_PROP_BUFFERSIZE,1);
       // cap2.set(CV_CAP_PROP_BUFFERSIZE,1);
       //cap.set(CV_CAP_GSTREAMER_QUEUE_LENGTH,1);
       //cap2.set(CV_CAP_GSTREAMER_QUEUE_LENGTH,1);
       cap.open("rtsp://usr:pass@x.x.x.x:554/Streaming/Channels/2?transportmode=unicast&profile=Profile_2",CAP_FFMPEG);  //open substream
       cap2.open("rtsp://usr:pass@x.x.x.x:554/Streaming/Channels/1?transportmode=unicast&profile=Profile_1",CAP_FFMPEG);  //open mainstream
       bool frame_read = false;
       int motion_val;

       while (true) {
           frame_read = cap.read(rgb_im); //read the frame from substream
           //cap2.grab();
           if (!frame_read) {
               break;
           }
           cvtColor(rgb_im, gray_im, CV_BGR2GRAY);
           motion_val = detect_motion(gray_im);   //find the motion value

           if (motion_val > MOTION_OCCURRED)    //check if motion occurs
           {
               cap2 >> frame_big;   //get one frame from the main stream
               process(frame_big);  //do processing
           }
           imshow("1", rgb_im);
           if (waitKey(1) >= 0)
               break;
       }

    When I open the stream with the CAP_FFMPEG flag, latency is very low (under 1 second). As seen above, I regularly read the substream and, if motion occurs, I read the mainstream. But the frames I read from the mainstream are not synchronous with the substream. Most probably I am grabbing frames that were waiting in the buffer, so I miss the frame with the motion and get an older frame instead. How can I handle this issue? Somehow I need to limit the buffer to one frame, but I cannot find a way to do it.

    I have tried cap.set(CV_CAP_PROP_BUFFERSIZE,1);, but since it needs DC1394 support it does not solve my problem.

    Secondly, I have tried calling cap2.grab() after reading the substream, but it increases the latency (to about 3 seconds).

    Thirdly, I have tried opening the VideoCapture objects with the CV_CAP_GSTREAMER flag: cap.open("rtsp://usr:pass@x.x.x.x:554/Streaming/Channels/2?transportmode=unicast&profile=Profile_2",CV_CAP_GSTREAMER);. This solves my buffering problem: when I detect motion from the substream, I am able to capture the same instant from the mainstream. But with GStreamer I have a huge delay of about 3 seconds, which is not acceptable for my case (with GStreamer I tried reading several RTSP streams at different resolutions and the latency remains the same). How can I solve the latency issue when using GStreamer with OpenCV?
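One common workaround, sketched below under the assumption that the stale frames are queued inside VideoCapture or the driver, is to drain the mainstream continuously in a background thread and keep only the newest decoded frame. This approximates a buffer size of one without CV_CAP_PROP_BUFFERSIZE; the class and method names are illustrative, not from the question.

```cpp
// Sketch: a background thread keeps reading the mainstream so that only the
// most recent frame is ever retained (an effective one-frame buffer).
#include <atomic>
#include <mutex>
#include <thread>
#include <opencv2/opencv.hpp>

class LatestFrameReader {
public:
    explicit LatestFrameReader(cv::VideoCapture& cap)
        : cap_(cap), running_(true), worker_([this] { loop(); }) {}

    ~LatestFrameReader() { running_ = false; worker_.join(); }

    // Copies the most recently decoded frame into 'out'; false if none yet.
    bool latest(cv::Mat& out) {
        std::lock_guard<std::mutex> lock(mu_);
        if (frame_.empty()) return false;
        frame_.copyTo(out);
        return true;
    }

private:
    void loop() {
        cv::Mat f;
        while (running_) {
            if (!cap_.read(f)) break;   // each read() pops one buffered frame
            std::lock_guard<std::mutex> lock(mu_);
            f.copyTo(frame_);           // overwrite: older frames are discarded
        }
    }

    cv::VideoCapture& cap_;
    std::atomic<bool> running_;
    cv::Mat frame_;
    std::mutex mu_;
    std::thread worker_;                // declared last: started after members init
};
```

With this, the motion loop would replace cap2>>frame_big; with reader.latest(frame_big); so the snapshot corresponds to the moment motion was detected. This is an untested sketch of the technique, not a verified fix for this particular camera.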