Advanced search

Media (0)

Word: - Tags -/xmlrpc

No media matching your criteria is available on this site.

Other articles (67)

  • Updating from version 0.1 to 0.2

    24 June 2013, by

    An explanation of the notable changes when moving from MediaSPIP version 0.1 to version 0.3. What's new?
    Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customizing by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013, by

    Present the changes in your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the form used to create a news item.
    News item creation form: for a document of the 'news item' type, the default fields are: Publication date (customize the publication date) (...)

On other sites (12065)

  • How to save AVPacket if I have input information from an online camera

    31 March 2020, by Orest

    I am new to libav. I have an online video camera and want to save the video from its archive to a video file with libav.

    The camera provides the following data:

    uint32_t frameType,           // I frame or P frame
    void *frame,                  // pointer to the frame
    size_t frameSize,             // size of the frame in bytes
    uint64_t timeStamp,           // timestamp in time_t units
    uint32_t width,               // frame width
    uint32_t height,              // frame height
    uint32_t genTime,             // I do not know what this is; always 0
    const char *encodingType      // "H264" or "H265"

    I tried this:

    void writeHeader() {
        mOutputFilePath = outputFilePath;
        int ret = 0;
        avformat_alloc_output_context2(&output_format_context, nullptr, nullptr, outputFilePath.c_str());

        AVStream *out_stream = avformat_new_stream(output_format_context, nullptr);

        out_stream->discard = AVDISCARD_DEFAULT;                   // do not change
        out_stream->codecpar->level = 42;                          // do not change
        out_stream->codecpar->profile = FF_PROFILE_H264_HIGH;      // do not change
        out_stream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;

        if (codecID == "H264")      out_stream->codecpar->codec_id = AV_CODEC_ID_H264;
        else if (codecID == "H265") out_stream->codecpar->codec_id = AV_CODEC_ID_H265;

        out_stream->codecpar->format = AV_PIX_FMT_YUV420P;
        out_stream->codecpar->height = heightFrame;
        out_stream->codecpar->width = widthFrame;
        // out_stream->codecpar->bit_rate = 2478235;
        // out_stream->codecpar->bits_per_coded_sample = 24;
        // out_stream->codecpar->bits_per_raw_sample = 8;
        out_stream->codecpar->sample_aspect_ratio.num = 0;
        out_stream->codecpar->sample_aspect_ratio.den = 1;
        out_stream->codecpar->color_primaries = AVCOL_PRI_UNSPECIFIED;  // do not change

        avio_open(&output_format_context->pb, mOutputFilePath.c_str(), AVIO_FLAG_WRITE);
        avformat_write_header(output_format_context, &opt);
    }

    void writePacket() {
        AVPacket inputPacket;
        av_init_packet(&inputPacket);
        inputPacket.buf = NULL;
        inputPacket.pts = (int)timeStamp;
        inputPacket.dts = inputPacket.pts;
        inputPacket.data = (unsigned char *)frame;
        inputPacket.size = (int)frameSize;

        if (frameType == KP2P_FRAME_TYPE_IFRAME)
        {
            inputPacket.flags = AV_PKT_FLAG_KEY;
        }
        inputPacket.duration = 0;
        inputPacket.pos = -1;

        av_interleaved_write_frame(output_format_context, &inputPacket);
        av_packet_unref(&inputPacket);
    }

    void closeFile()
    {
        av_write_trailer(output_format_context);
        if (output_format_context && !(output_format_context->oformat->flags & AVFMT_NOFILE))
            avio_closep(&output_format_context->pb);
        avformat_free_context(output_format_context);
    }

    In the output file I only get a black picture, and the timing is not correct (30 seconds of input become about 2 seconds of output). What am I doing wrong?
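
    A hedged reading of what might be going wrong, not a confirmed fix: the pts/dts handed to av_interleaved_write_frame() must be expressed in the output stream's time_base (which the muxer may adjust in avformat_write_header()), so writing raw time_t values usually produces wrong durations; and a black picture when muxing raw H.264/H.265 into MP4 is often worth checking against missing SPS/PPS in out_stream->codecpar->extradata. Below is a minimal sketch of the timestamp handling only, reusing the variables from the code above; camera_tb and firstTimeStamp are hypothetical helpers that are not in the original code.

    // Sketch: rescale the camera's time_t-based timestamps into the stream's
    // time_base before muxing. camera_tb and firstTimeStamp are hypothetical.
    static AVRational camera_tb = {1, 1};   // timeStamp is in whole seconds (time_t units)
    static uint64_t firstTimeStamp = 0;     // timestamp of the first written packet

    void writePacket() {
        AVPacket inputPacket;
        av_init_packet(&inputPacket);

        if (firstTimeStamp == 0)
            firstTimeStamp = timeStamp;     // start the output at pts 0

        // Convert elapsed seconds into units of the muxer's time_base.
        int64_t pts = av_rescale_q((int64_t)(timeStamp - firstTimeStamp), camera_tb,
                                   output_format_context->streams[0]->time_base);

        inputPacket.pts  = pts;
        inputPacket.dts  = pts;             // only I and P frames, so dts == pts
        inputPacket.data = (unsigned char *)frame;
        inputPacket.size = (int)frameSize;
        inputPacket.stream_index = 0;
        if (frameType == KP2P_FRAME_TYPE_IFRAME)
            inputPacket.flags |= AV_PKT_FLAG_KEY;

        av_interleaved_write_frame(output_format_context, &inputPacket);
        av_packet_unref(&inputPacket);
    }

    Setting out_stream->time_base (e.g. to 1/90000) before avformat_write_header() can also help, although the muxer is free to override it; the value actually used is whatever output_format_context->streams[0]->time_base holds after the header is written.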

  • Is it possible to grab a frame from a video stream and save it as PNG in another stream with ffmpeg?

    9 February 2020, by Lázár Zsolt

    I am trying to use FFMpeg with System.IO.Process to convert a seekable in-memory video stream into a thumbnail. Piping the thumbnail out through stdout isn't a problem, but piping in the video is tricky.

    My current code copies the entire video stream into stdin, which is very slow and unnecessary, because ffmpeg obviously doesn’t need the entire file to get the first frame. Writing the stream to the file system and specifying its path as an input argument is also very slow, because the source video can be several gigabytes.

    I have tried accomplishing this using existing libraries, such as AForge, FFMpegCore, Xabe.FFMpeg, xFFMpeg.NET and Accord.FFMPEG.Video, but unfortunately they can only work with actual files, not streams, and my input video is not available as a file.

    The stream object that supplies the video fully implements seeking and random access reading functionalities, just like a file stream, so there is literally no valid reason for this to not be possible, besides the limitations of the APIs (or my knowledge).

    As a last resort, I could use the Dokan.NET filesystem driver to expose the video stream as a virtual file so ffmpeg can read it, but that would be extreme overkill, and I'm looking for a better solution.

    Below is my current code. For the sake of simplicity, I am emulating the input video stream with a FileStream.

    var process = new Process();
    process.StartInfo.FileName = "ffmpeg.exe";
    // Read from stdin ("-i -"), seek to 1 s, grab one frame, encode it as PNG and write it to stdout.
    process.StartInfo.Arguments = "-i - -ss 00:00:01 -vframes 1 -q:v 2 -c:v png -f image2pipe -";
    process.StartInfo.RedirectStandardInput = true;
    process.StartInfo.RedirectStandardOutput = true;
    process.StartInfo.UseShellExecute = false;
    process.StartInfo.CreateNoWindow = true;
    process.Start();

    // Copy the entire video into ffmpeg's stdin (this is the slow part the question is about).
    var stream = File.OpenRead("test.mp4");
    stream.CopyTo(process.StandardInput.BaseStream);
    process.StandardInput.BaseStream.Flush();
    process.StandardInput.BaseStream.Close();

    // Read the PNG that ffmpeg writes to stdout and save it to a file.
    var stream2 = File.Create("test.png");
    var buffer = new byte[4096];
    int read;
    while ((read = process.StandardOutput.BaseStream.Read(buffer, 0, buffer.Length)) > 0)
        stream2.Write(buffer, 0, read);

    EDIT:
    It might be useful to clarify what kind of data the input stream contains. It is basically a video file that can be in any commonly used format (avi, mp4, mov, ts, mkv, wmv, ...). The extension of the video (as if it were a file) is also known.
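
    For what it's worth, the seekability described above is exactly what FFmpeg's own C API exposes through a custom AVIOContext: you supply read and seek callbacks and the demuxer can then seek in the stream as if it were a file. This does not answer the C#/CLI part of the question directly, but a minimal sketch of the idea in C is shown below; MySource, my_read and my_seek are hypothetical placeholders for the seekable stream described above.

    // Sketch: custom seekable input for libavformat via avio_alloc_context().
    #include <libavformat/avformat.h>

    struct MySource;                    // hypothetical handle to the seekable stream

    // Fill buf with up to buf_size bytes; return bytes read or AVERROR_EOF.
    static int my_read(void *opaque, uint8_t *buf, int buf_size)
    {
        (void)opaque; (void)buf; (void)buf_size;
        return AVERROR_EOF;             // placeholder: read from the real stream here
    }

    // Handle SEEK_SET/SEEK_CUR/SEEK_END and AVSEEK_SIZE; return the new position (or total size).
    static int64_t my_seek(void *opaque, int64_t offset, int whence)
    {
        (void)opaque; (void)offset; (void)whence;
        return -1;                      // placeholder: seek the real stream here
    }

    int open_custom_input(struct MySource *src, AVFormatContext **out_fmt)
    {
        const int bufsz = 1 << 16;
        unsigned char *iobuf = (unsigned char *)av_malloc(bufsz);
        AVIOContext *avio = avio_alloc_context(iobuf, bufsz, 0 /* read-only */,
                                               src, my_read, NULL, my_seek);

        AVFormatContext *fmt = avformat_alloc_context();
        fmt->pb = avio;
        fmt->flags |= AVFMT_FLAG_CUSTOM_IO;

        int ret = avformat_open_input(&fmt, NULL, NULL, NULL);
        if (ret < 0)
            return ret;                 // avformat_open_input frees fmt on failure

        *out_fmt = fmt;                 // from here, the usual demux/decode path applies
        return 0;
    }

    In the .NET world the same effect usually requires either a wrapper that exposes the AVIOContext callbacks or, as the question already notes, presenting the stream to the ffmpeg executable as something file-like.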

  • FFmpeg: MD5 hashes of M3U8 playlists generated from the same input video with different segment durations (after applying a video filter) don't match

    30 July 2020, by Saurabh P Bhandari

    Here are a few commands I am using to convert and resize a video in MP4 format into an M3U8 playlist.

    For a given input video (MP4 format), generate multiple video segments with a segment duration of 30 seconds:

    ffmpeg -loglevel error -i input.mp4 -dn -sn -an -c:v copy -bsf:v h264_mp4toannexb -copyts -start_at_zero -f segment -segment_time 30 30%03d.mp4 -dn -sn -vn -c:a copy audio.aac

    Apply a video filter (in this case scaling) to each segment and convert it to M3U8 format:

    ls 30*.mp4 | parallel 'ffmpeg -loglevel error -i {} -vf scale=-2:144 -hls_list_size 0 {}.m3u8'

    Store the list of generated m3u8 files in list.txt, each entry in the format file 'segment-name.m3u8':

    for f in 30*.m3u8; do echo "file '$f'" >> list.txt; done

    Using the concat demuxer, combine all the segment files (which are in M3U8 format) and the audio to get one final m3u8 playlist pointing to segments with a duration of 10 seconds:

    ffmpeg -loglevel error -f concat -i list.txt -i audio.aac -c copy -hls_list_size 0 -hls_time 10 output_30.m3u8

    I can change the segment duration in the first step from 30 seconds to 60 seconds, and compare the MD5 hashes of the final M3U8 playlists generated in the two cases using this command:

    ffmpeg -loglevel error -i <input m3u8 playlist> -f md5 -

    The MD5 hashes of the output files differ, i.e. the video streams of output_30.m3u8 and output_60.m3u8 are not the same.

    Can anyone elaborate on this?

    (I expected the MD5 hashes to be the same.)
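
    One possible explanation, offered here as a hedged note rather than part of the original post: the scaling step re-encodes every segment independently, and an encoder given different segment boundaries (30 s vs 60 s) makes different rate-control and keyframe decisions, so both the compressed and the decoded video genuinely differ; identical hashes would only be expected if the video were stream-copied rather than filtered. To see where the two outputs start to diverge frame by frame, FFmpeg's framemd5 muxer can be run on each playlist, for example:

    ffmpeg -loglevel error -i output_30.m3u8 -map 0:v -f framemd5 30.framemd5
    ffmpeg -loglevel error -i output_60.m3u8 -map 0:v -f framemd5 60.framemd5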
