
Other articles (97)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several additional plugins, compared to the channel sites, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to handle registrations and requests to create a shared-hosting instance as soon as users sign up; the verifier plugin, which provides a field-verification API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

On other sites (10197)

  • FFmpeg: Get better encoding out of my function

    31 October 2019, by A Person

    I need some assistance with my task.

    I am using FFmpeg to burn time and the channel name onto the video.

    My goal is to record a stream that is HTML5-compatible, with the following settings:

    Video wrapper: MP4
    Video codec: H.264
    Video bitrate: 1 Mbps
    Audio codec: AAC
    Audio bitrate: 128 Kbps
    GPU encoding

    This is what I am using:

    ffmpeg -hwaccel cuvid -y -i {udp} -vf "drawtext=fontfile=calibrib.tff:fontsize=25:text='{ChannelName} %{localtime}': x=10: y=10: fontcolor=white: box=1: boxcolor=0x000000" -pix_fmt yuv420p -vsync 1 -c:v h264_nvenc -r 25 -threads 0  -b:v 1M -profile:v main -minrate 1M -maxrate 1M -bufsize 10M -sc_threshold 0 -c:a aac -b:a 128k -ac 2 -ar 44100 -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0" -bsf:v h264_mp4toannexb -t 00:30:00 {output}\{ChannelName}\{ChannelName}_{year}_{monthno}_{day}__{Hours}_{Minutes}_{Seconds}.mp4

    {ChannelName}_{year}_{monthno}_{day}__{Hours}_{Minutes}_{Seconds} are all variables holding information.

    {udp} holds the UDP stream link.

    I have done it this way because I have multiple UDP streams recording.
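    For what it's worth, assembling the templated per-channel output path can be sketched in C as below. This is a hypothetical helper, not the poster's actual code; the directory layout, function name, and example values are illustrative assumptions (and it uses '/' separators, whereas the command above uses Windows-style '\').

```c
#include <stdio.h>

/* Hypothetical sketch: build one output path per channel, mirroring the
 * template {output}/{ChannelName}/{ChannelName}_{year}_{monthno}_{day}__
 * {Hours}_{Minutes}_{Seconds}.mp4 used in the ffmpeg command above.
 * Returns the number of characters that snprintf would have written. */
static int build_output_path(char *buf, size_t size, const char *outdir,
                             const char *channel, int year, int month, int day,
                             int hour, int minute, int second)
{
    return snprintf(buf, size, "%s/%s/%s_%04d_%02d_%02d__%02d_%02d_%02d.mp4",
                    outdir, channel, channel, year, month, day,
                    hour, minute, second);
}
```

    Called as, say, build_output_path(path, sizeof path, "recordings", "News24", 2019, 10, 31, 9, 5, 0), this yields recordings/News24/News24_2019_10_31__09_05_00.mp4; one such path would be generated for each spawned ffmpeg process.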

    Although this works, is there a better way to do this while keeping the -vf filter, since I need the time and channel name?

    Currently, this uses between 0.8% and 1.9% of the GPU on my Quadro P4000. I don’t want to use more than this, as I have more than 30 streams.

  • avformat/matroskaenc: Don't use size of inexistent Cluster

    22 January 2020, by Andreas Rheinhardt

    In order to determine whether the current Cluster needs to be closed
    because of the limits on clustersize and clustertime,
    mkv_write_packet() would first get the size of the current Cluster by
    applying avio_tell() on the dynamic buffer holding the current Cluster.
    It did this without checking whether there is a dynamic buffer for
    writing Clusters open right now.

    In this case (which happens when writing the first packet)
    avio_tell() returned AVERROR(EINVAL); yet it is not good to rely on
    avio_tell() (or actually, avio_seek()) to handle the situation
    gracefully.

    Fixing this is easy: only check whether a Cluster needs to be closed
    if a Cluster is in fact open.

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>

    • [DH] libavformat/matroskaenc.c
  • How do I use FFMPEG/libav to access the data in individual audio samples?

    15 October 2022, by Breadsnshreds

    The end result is I'm trying to visualise the audio waveform to use in a DAW-like software. So I want to get each sample's value and draw it. With that in mind, I'm currently stumped by trying to gain access to the values stored in each sample. For the time being, I'm just trying to access the value in the first sample - I'll build it into a loop once I have some working code.


    I started off by following the code in this example. However, LibAV/FFMPEG has been updated since then, so a lot of the code is deprecated or straight up doesn't work the same anymore.


    Based on the example above, I believe the logic is as follows:


      1. get the formatting info of the audio file
      2. get audio stream info from the format
      3. check that the codec required for the stream is an audio codec
      4. get the codec context (I think this is info about the codec) - this is where it gets kind of confusing for me
      5. create an empty packet and frame to use - packets are for holding compressed data and frames are for holding uncompressed data
      6. the format reads the first frame from the audio file into our packet
      7. pass that packet into the codec context to be decoded
      8. pass our frame to the codec context to receive the uncompressed audio data of the first frame
      9. create a buffer to hold the values and try allocating samples to it from our frame

    From debugging my code, I can see that step 7 succeeds and the packet that was empty receives some data. In step 8, the frame doesn't receive any data. This is what I need help with. I get that if I get the frame, assuming a stereo audio file, I should have two samples per frame, so really I just need your help to get uncompressed data into the frame.


    I've scoured the documentation for loads of different classes, and I'm pretty sure I'm using the right classes and functions to achieve my goal, but evidently not. (I'm also using Qt, so I use qDebug throughout, and a QString named path holds the URL for the audio file.) So without further ado, here's my code:


    // Step 1 - get the formatting info of the audio file
    AVFormatContext* format = avformat_alloc_context();
    if (avformat_open_input(&format, path.toStdString().c_str(), NULL, NULL) != 0) {
        qDebug() << "Could not open file " << path;
        return -1;
    }

    // Step 2 - get audio stream info from the format
    if (avformat_find_stream_info(format, NULL) < 0) {
        qDebug() << "Could not retrieve stream info from file " << path;
        return -1;
    }

    // Step 3 - check that the codec required for the stream is an audio codec
    int stream_index = -1;
    for (unsigned int i = 0; i < format->nb_streams; i++) {
        if (format->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
            stream_index = i;
            break;
        }
    }

    if (stream_index == -1) {
        qDebug() << "Could not retrieve audio stream from file " << path;
        return -1;
    }

    // Step 4 - get the codec context
    const AVCodec *codec = avcodec_find_decoder(format->streams[stream_index]->codecpar->codec_id);
    AVCodecContext *codecContext = avcodec_alloc_context3(codec);
    avcodec_open2(codecContext, codec, NULL);

    // Step 5 - create an empty packet and frame to use
    AVPacket *packet = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();

    // Step 6 - the format reads the first frame from the audio file into our packet
    av_read_frame(format, packet);
    // Step 7 - pass that packet into the codec context to be decoded
    avcodec_send_packet(codecContext, packet);
    // Step 8 - pass our frame to the codec context to receive the uncompressed audio data of the first frame
    avcodec_receive_frame(codecContext, frame);

    // Step 9 - create a buffer to hold the values and try allocating samples to it from our frame
    double *buffer;
    av_samples_alloc((uint8_t**) &buffer, NULL, 1, frame->nb_samples, AV_SAMPLE_FMT_DBL, 0);
    qDebug() << "packet: " << &packet;
    qDebug() << "frame: " << frame;
    qDebug() << "buffer: " << buffer;


    For the time being, step 9 is incomplete, as you can probably tell. But for now, I need help with step 8. Am I missing a step, or using the wrong function or class? Cheers.
