Advanced search

Media (0)

No media matching your criteria is available on this site.

Other articles (14)

  • Adding notes and captions to images

    7 February 2011, by

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights to create, modify, and delete notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites that publish documents of all types.
    It creates "media" items, meaning: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a given "media" article;

  • Accepted formats

    28 January 2010, by

    The following commands provide information about the formats and codecs supported by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    As a first step, we (...)
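
    As a side note (not from the article itself), the same codec list is also available programmatically from C; a minimal sketch using libavcodec's iteration API, assuming FFmpeg 4.0 or newer:

    /* List the codecs compiled into the local FFmpeg libraries,
     * roughly what "ffmpeg -codecs" prints. */
    #include <stdio.h>
    #include <libavcodec/avcodec.h>

    int main(void)
    {
        void *iter = NULL;
        const AVCodec *c;
        while ((c = av_codec_iterate(&iter))) {
            printf("%s (%s): %s\n",
                   c->name,
                   av_codec_is_encoder(c) ? "encoder" : "decoder",
                   c->long_name ? c->long_name : "");
        }
        return 0;
    }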

On other sites (4570)

  • Capture CMOS video with FPGA, encode and send over Ethernet

    23 December 2015, by ya_urock

    I am planning an open source university project for my students, based on the Xilinx Zynq FPGA, that will capture CMOS video, encode it into a transport stream and send it over Ethernet to a remote PC. Basically, I want to design yet another IP camera. I have strong FPGA experience, but lack knowledge regarding encoding and transferring video data. Here is my plan:

    1. Connect the CMOS camera to the FPGA, receive video frames and save them to external DDR memory, verifying via HDMI output to a monitor. I have no problems with that.

    2. I understand that I have to compress my video stream, for example to the H.264 format, and put it into a transport stream. Here I have little knowledge and need some hints.

    3. After I form the transport stream, I can send it over the network using UDP packets. I have a working hardware solution that reads data from a FIFO and sends it to a remote PC as UDP packets.

    4. And finally I plan to receive and play the video using the ffmpeg library:

      ffplay udp://localhost:5678

    My question is basically about step 2. How do I convert pixel frames to a transport stream? My options are:

    1. Use commercial IP, like

    Here I doubt that they are free to use, and we don’t have much funding.

    2. Use open cores, like

      • http://sourceforge.net/projects/hardh264/ - here the core generates only raw H.264 output, but how do I encapsulate it into a transport stream?
      • I have searched opencores.org, but with no success on this topic.
      • Maybe somebody knows some good, relevant open source FPGA projects?
    3. Develop a hardware encoder myself using Vivado HLS (C language). But the problem here is that I don’t know the algorithm. Maybe I could dig into ffmpeg or the Cisco openh264 library and find a function there that converts raw pixel frames to the H.264 format and then puts it into a transport stream? Any help would be appreciated here as well.

    Also, I am worried about compatibility between the stream format I might generate inside the FPGA and the one expected by the ffplay utility on the host. Any help, hints, links and books are appreciated!
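
    One way to pin down step 2 and the compatibility worry before touching the FPGA is to prototype the exact codec/container combination in software. Here is a minimal, illustrative C sketch (not from the original post) that encodes dummy frames to H.264 with libavcodec and muxes them into an MPEG-TS stream over UDP with libavformat; the resolution, frame rate and address are arbitrary, an FFmpeg build with an H.264 encoder is assumed, and most error handling is omitted:

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    int main(void)
    {
        /* H.264 encoder (typically libx264 behind AV_CODEC_ID_H264) */
        const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264);
        AVCodecContext *enc = avcodec_alloc_context3(codec);
        enc->width     = 1280;
        enc->height    = 720;
        enc->pix_fmt   = AV_PIX_FMT_YUV420P;
        enc->time_base = (AVRational){1, 25};
        avcodec_open2(enc, codec, NULL);

        /* MPEG-TS muxer writing straight to a UDP URL */
        AVFormatContext *mux = NULL;
        avformat_alloc_output_context2(&mux, NULL, "mpegts", "udp://127.0.0.1:5678");
        AVStream *st = avformat_new_stream(mux, NULL);
        avcodec_parameters_from_context(st->codecpar, enc);
        st->time_base = enc->time_base;
        avio_open(&mux->pb, mux->url, AVIO_FLAG_WRITE);
        avformat_write_header(mux, NULL);

        AVFrame  *frame = av_frame_alloc();
        AVPacket *pkt   = av_packet_alloc();
        frame->format = enc->pix_fmt;
        frame->width  = enc->width;
        frame->height = enc->height;
        av_frame_get_buffer(frame, 0);

        for (int i = 0; i < 250; i++) {      /* ~10 seconds at 25 fps */
            av_frame_make_writable(frame);
            /* ...fill frame->data[] with camera pixels here... */
            frame->pts = i;
            avcodec_send_frame(enc, frame);
            while (avcodec_receive_packet(enc, pkt) == 0) {
                av_packet_rescale_ts(pkt, enc->time_base, st->time_base);
                pkt->stream_index = st->index;
                av_interleaved_write_frame(mux, pkt);
            }
        }
        av_write_trailer(mux);
        return 0;
    }

    If the FPGA eventually emits the same H.264-in-MPEG-TS layout, the ffplay command from step 4 should accept it unchanged.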

  • FATE Under New Management

    2 August 2010, by Multimedia Mike — FATE Server

    At any given time, I have between 20 and 30 blog posts in some phase of development. Half of them seem to be contemplations regarding the design and future of my original FATE system and are thus ready for the recycle bin at this point. Mans is a man of considerably fewer words, so I thought I would use a few words to describe the new FATE system that he put together.

    Overview
    Here are the distinguishing features that Mans mentioned in his announcement message:

    • Test specs are part of the ffmpeg repo. They are thus properly versioned, and any developer can update them as needed.
    • Support for inexact tests.
    • Parallel testing on multi-core systems.
    • Anyone registered with FATE can add systems.
    • Client side entirely in POSIX shell script and GNU make.
    • Open source backend and web interface.
    • Client and backend entirely decoupled.
    • Anyone can contribute patches.

    Client
    The FATE build/test client source code is contained in tests/fate.sh in the FFmpeg source tree. The script, as the extension implies, is a shell script. It takes a text file full of shell variables, updates the source code, configures, builds, and tests. It is a considerably smaller amount of code, especially compared to my original Python code. Part of this is because most of the testing logic has shifted into FFmpeg itself. The build system knows about all the FATE tests, and all of the specs are now maintained in the codebase (thanks to all who spearheaded that effort; I think it was Vitor and Mans).

    The client creates a report file which contains a series of lines to be transported to the server. The first line has some information about the configuration and compiler, plus the overall status of the build/test iteration. The second line contains ’./configure’ information. Each of the remaining lines contains information about an individual FATE test, mostly in Base64 format.

    Server
    The server source code lives at http://git.mansr.com/?p=fateweb. It is written in Perl and plugs into a CGI-capable HTTP server. Authentication between the client and the server operates via SSH/SSL. In stark contrast to the original FATE server, there is no database component on the backend. The new system maintains information in a series of flat files.

  • How to adjust audio volume with the AVFilter API from FFmpeg in Linux?

    20 December 2024, by wangt13

    I am working on an embedded Linux system (kernel 5.10.24), developing an audio player using the FFmpeg and ALSA libraries.

    Now the basic functions are done, and I can play MP3 files using the AVCodec and SWresample interfaces of FFmpeg:

    int decode_play(void)
    {
        .....
        while (1) {
            /* Pull the next packet from the demuxer */
            if (av_read_frame(pfmtctx, packet) < 0) {
                avcodec_flush_buffers(pcodectx);
                printf("Got end of media, breaking\n");
                break;
            }
            if (packet->stream_index != stream) {
                continue;
            }
            /* Feed the packet to the decoder */
            int res = avcodec_send_packet(pcodectx, packet);
            if (res < 0) {
                printf("Error in decoding audio frame.\n");
                break;
            }

            /* Drain all decoded frames produced by this packet */
            while (res >= 0) {
                res = avcodec_receive_frame(pcodectx, pframe);
                if (res == AVERROR(EAGAIN)) {
                    break;
                } else if (res == AVERROR_EOF) {
                    break;
                } else if (res >= 0) {
                    /* Resample, then hand the samples to ALSA (interleaved write) */
                    int outsamples = swr_convert(swrctx, &buf, pcodectx->frame_size, (const uint8_t **)pframe->data, pframe->nb_samples);
                    snd_pcm_writei(pcm_handle, buf, outsamples);
                }
            }
        }
    }

    Now, I want to add volume control to this audio player.
    I found that FFmpeg has AVFilter functions which can be used to adjust audio volume. How can I use them in the current design? I am not sure which filters I should use, and where I should plug the filters in. Should I put the filters before or after swr_convert?

    With Erdal’s help, I changed the player as follows:

    int decode_play(void)
    {
        /* Init the filter graph as filter_audio.c does */
        init_filter_graph(&graph, &src, &sink);

        while (1) {
            if (av_read_frame(pfmtctx, packet) < 0) {
                avcodec_flush_buffers(pcodectx);
                printf("Got end of media, breaking\n");
                break;
            }
            if (packet->stream_index != stream) {
                continue;
            }
            int res = avcodec_send_packet(pcodectx, packet);
            if (res < 0) {
                printf("Error in decoding audio frame.\n");
                break;
            }

            while (res >= 0) {
                res = avcodec_receive_frame(pcodectx, pframe);
                if (res == AVERROR(EAGAIN)) {
                    break;
                } else if (res == AVERROR_EOF) {
                    break;
                } else if (res >= 0) {
                    /* Push the decoded frame into the filter graph */
                    int ret = av_buffersrc_add_frame(src, pframe);
                    if (ret < 0) {
                        continue;
                    }
                    /* Pull filtered frames out of the sink and play them */
                    while ((ret = av_buffersink_get_frame(sink, filt_frame)) >= 0) {
                        int nb_samples = av_rescale_rnd(filt_frame->nb_samples, RESAMPLE_SAMPLE_RATE, filt_frame->sample_rate, AV_ROUND_UP);
                        printf("nb_samples: %d, f.nb_sample: %d, f.sr: %d\n", nb_samples, filt_frame->nb_samples, filt_frame->sample_rate);

                        int outs = snd_pcm_writei(playback_handle, filt_frame->extended_data[0], nb_samples);
                        if (outs < 0) {
                            snd_pcm_prepare(playback_handle);
                        }
                        av_frame_unref(filt_frame);   /* release before reusing on the next iteration */
                    }
                }
            }
        }
    }
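
    The init_filter_graph() call above is only mentioned, never shown. Below is a minimal illustrative sketch of such a function (not the poster's actual code), assuming the 44100 Hz stereo fltp source described below, a hard-coded gain of 0.5, and the chain abuffer -> volume -> abuffersink:

    #include <libavfilter/avfilter.h>
    #include <libavfilter/buffersrc.h>
    #include <libavfilter/buffersink.h>

    static int init_filter_graph(AVFilterGraph **graph,
                                 AVFilterContext **src, AVFilterContext **sink)
    {
        AVFilterGraph *g = avfilter_graph_alloc();
        AVFilterContext *abuffer_ctx, *volume_ctx, *sink_ctx;
        int ret;

        if (!g)
            return AVERROR(ENOMEM);

        /* abuffer feeds decoded frames in; its parameters must match
         * the decoder output exactly */
        ret = avfilter_graph_create_filter(&abuffer_ctx,
                  avfilter_get_by_name("abuffer"), "src",
                  "sample_rate=44100:sample_fmt=fltp:channel_layout=stereo:time_base=1/44100",
                  NULL, g);
        if (ret < 0)
            return ret;

        /* volume does the actual gain control (fixed at half volume here) */
        ret = avfilter_graph_create_filter(&volume_ctx,
                  avfilter_get_by_name("volume"), "vol", "volume=0.5", NULL, g);
        if (ret < 0)
            return ret;

        /* abuffersink is where filtered frames are pulled out */
        ret = avfilter_graph_create_filter(&sink_ctx,
                  avfilter_get_by_name("abuffersink"), "sink", NULL, NULL, g);
        if (ret < 0)
            return ret;

        if ((ret = avfilter_link(abuffer_ctx, 0, volume_ctx, 0)) < 0 ||
            (ret = avfilter_link(volume_ctx, 0, sink_ctx, 0)) < 0)
            return ret;

        if ((ret = avfilter_graph_config(g, NULL)) < 0)
            return ret;

        *graph = g;
        *src   = abuffer_ctx;
        *sink  = sink_ctx;
        return 0;
    }

    The aformat filter discussed next would be linked between volume and the sink, so that the graph converts the audio to the playback format in the same pass.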

    The original audio is 44100 Hz, stereo, fltp. I chose to use the aformat filter to do the resampling instead of swresample, as follows:

    snprintf(options_str, sizeof(options_str),
             "sample_fmts=%s:sample_rates=%d:channel_layouts=stereo",
             av_get_sample_fmt_name(AV_SAMPLE_FMT_S16), 32000);
    if (avfilter_init_str(aformat_ctx, options_str) < 0) {
        printf("error init aformat filter");
        return -1;
    }

    I succeeded in playing the resampled audio with AVFilter.