
Media (1)

Keyword: - Tags -/stallman

Other articles (66)

  • Contributing to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do this, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to sign up to the translators' mailing list to ask for more information.
    At the moment MediaSPIP is only available in French and (...)

  • MediaSPIP Player: potential problems

    22 February 2011

    The player does not work in Internet Explorer
    In Internet Explorer (at least versions 7 and 8), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the problem may come from the configuration of Apache's mod_deflate.
    If the configuration of that Apache module contains a line similar to the following, try removing or commenting it out to see whether the player then works correctly: (...)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in MP4, Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

On other sites (15130)

  • Including ffmpeg in c program/application

    11 March 2016, by user14598204

    I want to make a C program in which I can directly use the actual functions of ffmpeg. Right now I'm using system("ffmpeg command") to run an ffmpeg command from my C program. This executes, directly or indirectly, in a terminal on my Ubuntu Linux system. However, it repeatedly opens a terminal, runs the command and closes the terminal again, so multiple terminals end up running at the same time, which is not a good solution.

    I want to include/embed the ffmpeg code in my program so that I can directly access the actual functions that are defined in the ffmpeg source. I'm working on a C project in which I need to build a complete firmware that runs this single program, so that I don't have to rely on multiple terminals and can call the functions directly.

    So how can I achieve my goal of integrating/embedding/including ffmpeg in my C program?
    Any useful links or suggestions are greatly appreciated.
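
    A minimal sketch of the library-based approach this question is asking about: instead of spawning the ffmpeg binary with system(), the program links against FFmpeg's own libraries (libavformat, libavutil, libavcodec) and calls their APIs directly. This is only an illustration, assuming the FFmpeg development headers are installed; the file name "input.mp4" is a placeholder.

#include <stdio.h>
#include <libavformat/avformat.h>

int main(int argc, char *argv[])
{
    const char *filename = (argc > 1) ? argv[1] : "input.mp4"; /* placeholder */
    AVFormatContext *fmt_ctx = NULL;

    /* Open the container and read the stream headers. */
    if (avformat_open_input(&fmt_ctx, filename, NULL, NULL) < 0) {
        fprintf(stderr, "Could not open %s\n", filename);
        return 1;
    }
    if (avformat_find_stream_info(fmt_ctx, NULL) < 0) {
        fprintf(stderr, "Could not read stream info\n");
        avformat_close_input(&fmt_ctx);
        return 1;
    }

    /* Print the same kind of stream summary the ffmpeg CLI prints,
     * without spawning any terminal. */
    av_dump_format(fmt_ctx, 0, filename, 0);

    avformat_close_input(&fmt_ctx);
    return 0;
}

    On Ubuntu this builds against the distribution's development packages (e.g. libavformat-dev) with something like: gcc demo.c $(pkg-config --cflags --libs libavformat libavutil)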

  • How AVCodecContext bitrate, framerate and timebase is used when encoding single frame

    28 March 2023, by Cyrus

    I am trying to learn FFmpeg from examples, as there is a tight schedule. The task is to encode a raw YUV image into JPEG format at a given width and height. I have found examples on the official FFmpeg website, which turn out to be quite straightforward. However, there are some fields in AVCodecContext that I thought only make sense when encoding video (e.g. bitrate, framerate, timebase, gop_size, max_b_frames, etc.).

    


    I understand at a high level what those values mean for video, but do I need to care about them when I just want a single image? Currently, for testing, I am just setting them to dummy values and it seems to work. But I want to make sure that I am not making terrible assumptions that will break in the long run.

    


    EDIT:

    


    Here is the code I got. Most of it is copied and pasted from the examples, with some changes to replace old APIs with newer ones.

    


    #include "thumbnail.h"
#include "libavcodec/avcodec.h"
#include "libavutil/imgutils.h"
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

void print_averror(int error_code) {
    char err_msg[100] = {0};
    av_strerror(error_code, err_msg, 100);
    printf("Reason: %s\n", err_msg);
}

ffmpeg_status_t save_yuv_as_jpeg(uint8_t* source_buffer, char* output_thumbnail_filename, int thumbnail_width, int thumbnail_height) {
    const AVCodec* mjpeg_codec = avcodec_find_encoder(AV_CODEC_ID_MJPEG);
    if (!mjpeg_codec) {
        printf("Codec for mjpeg cannot be found.\n");
        return FFMPEG_THUMBNAIL_CODEC_NOT_FOUND;
    }

    AVCodecContext* codec_ctx = avcodec_alloc_context3(mjpeg_codec);
    if (!codec_ctx) {
        printf("Codec context cannot be allocated for the given mjpeg codec.\n");
        return FFMPEG_THUMBNAIL_ALLOC_CONTEXT_FAILED;
    }

    AVPacket* pkt = av_packet_alloc();
    if (!pkt) {
        printf("Thumbnail packet cannot be allocated.\n");
        return FFMPEG_THUMBNAIL_ALLOC_PACKET_FAILED;
    }

    AVFrame* frame = av_frame_alloc();
    if (!frame) {
        printf("Thumbnail frame cannot be allocated.\n");
        return FFMPEG_THUMBNAIL_ALLOC_FRAME_FAILED;
    }

    // The part that I don't understand
    codec_ctx->bit_rate = 400000;
    codec_ctx->width = thumbnail_width;
    codec_ctx->height = thumbnail_height;
    codec_ctx->time_base = (AVRational){1, 25};
    codec_ctx->framerate = (AVRational){1, 25};

    codec_ctx->gop_size = 10;
    codec_ctx->max_b_frames = 1;
    codec_ctx->pix_fmt = AV_PIX_FMT_YUV420P;
    int ret = av_image_fill_arrays(frame->data, frame->linesize, source_buffer, AV_PIX_FMT_YUV420P, thumbnail_width, thumbnail_height, 32);
    if (ret < 0) {
        print_averror(ret);
        printf("Pixel format: yuv420p, width: %d, height: %d\n", thumbnail_width, thumbnail_height);
        return FFMPEG_THUMBNAIL_FILL_FRAME_DATA_FAILED;
    }

    ret = avcodec_send_frame(codec_ctx, frame);
    if (ret < 0) {
        print_averror(ret);
        printf("Failed to send frame to encoder.\n");
        return FFMPEG_THUMBNAIL_FILL_SEND_FRAME_FAILED;
    }

    ret = avcodec_receive_packet(codec_ctx, pkt);
    if (ret < 0) {
        print_averror(ret);
        printf("Failed to receive packet from encoder.\n");
        return FFMPEG_THUMBNAIL_FILL_SEND_FRAME_FAILED;
    }

    // store the thumbnail in output
    int fd = open(output_thumbnail_filename, O_CREAT | O_RDWR);
    write(fd, pkt->data, pkt->size);
    close(fd);

    // freeing allocated structs
    avcodec_free_context(&codec_ctx);
    av_frame_free(&frame);
    av_packet_free(&pkt);
    return FFMPEG_SUCCESS;
}
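
    For what it's worth, a minimal sketch of the setup a single-frame MJPEG encode typically needs is shown below (this is not the poster's code, and the helper name setup_mjpeg_encoder is invented for illustration). Fields such as bit_rate, gop_size and max_b_frames drive rate control and GOP structure for multi-frame video, so for one JPEG the essential fields are width, height, pix_fmt and a valid time_base. Note that the snippet above, as posted, never calls avcodec_open2(), which avcodec_send_frame() requires, and the AVFrame's width, height and format fields must also be set to match the context.

#include <libavcodec/avcodec.h>
#include <libavutil/error.h>

static int setup_mjpeg_encoder(AVCodecContext **out_ctx, int width, int height)
{
    const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MJPEG);
    if (!codec)
        return AVERROR_ENCODER_NOT_FOUND;

    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    if (!ctx)
        return AVERROR(ENOMEM);

    /* The fields a single JPEG actually needs. */
    ctx->width     = width;
    ctx->height    = height;
    ctx->pix_fmt   = AV_PIX_FMT_YUVJ420P;   /* full-range 4:2:0, accepted by the MJPEG encoder */
    ctx->time_base = (AVRational){1, 25};   /* any sane value works for a single frame */

    /* bit_rate, gop_size, max_b_frames etc. are left at their defaults here:
     * they only shape rate control and GOP structure for video streams. */

    int ret = avcodec_open2(ctx, codec, NULL);
    if (ret < 0) {
        avcodec_free_context(&ctx);
        return ret;
    }
    *out_ctx = ctx;
    return 0;
}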


    


  • No such file or directory: 'ffprobe': 'ffprobe'

    27 November 2023, by Jack McCumber

    I'm hoping someone can point me in the right direction. I'm currently trying to build a Python GUI that plays clips of sounds from a specified file and allows the user to annotate it. All the research I've done has pointed me to using pydub. I'm using the following snippet:

    


    from pydub import AudioSegment
from pydub.playback import play

song = AudioSegment.from_wav("beepboop.mp3")
play(song)


    


    However, I'm currently getting the following error:

    


    Warning (from warnings module):
      File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pydub/utils.py", line 170
        warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning)
    RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work

    Warning (from warnings module):
      File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pydub/utils.py", line 198
        warn("Couldn't find ffprobe or avprobe - defaulting to ffprobe, but may not work", RuntimeWarning)
    RuntimeWarning: Couldn't find ffprobe or avprobe - defaulting to ffprobe, but may not work
    Traceback (most recent call last):
      File "/Users/jack/Desktop/code-repo/convert-csv-to-json/cleaner.py", line 7, in <module>
        song = AudioSegment.from_wav("beepboop.mp3")
      File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pydub/audio_segment.py", line 808, in from_wav
        return cls.from_file(file, 'wav', parameters=parameters)
      File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pydub/audio_segment.py", line 728, in from_file
        info = mediainfo_json(orig_file, read_ahead_limit=read_ahead_limit)
      File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pydub/utils.py", line 274, in mediainfo_json
        res = Popen(command, stdin=stdin_parameter, stdout=PIPE, stderr=PIPE)
      File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py", line 775, in __init__
        restore_signals, start_new_session)
      File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py", line 1522, in _execute_child
        raise child_exception_type(errno_num, err_msg, err_filename)
    FileNotFoundError: [Errno 2] No such file or directory: 'ffprobe': 'ffprobe'

    I've downloaded both ffprobe and ffmpeg from the official website, extracted the files, and installed them to /usr/local/bin based on advice I've read in other StackOverflow comments. When I run:

    print(os.environ['PATH'])

    I get:

    /usr/bin:/bin:/usr/sbin:/sbin

    Also, MacOSX won't let me manually drag the executable into /usr/bin or /usr/sbin. Nor will it let me copy it into the directory using:

    $ sudo cp ffmpeg /usr/bin

    When I use:

    pip3 install ffprobe-python

    I get:

    Requirement already satisfied: ffprobe-python in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (1.0.3)

    I'll add that when I try and use the "apt install" method, it barks at me and says my version of MacOSX isn't supported.
