
Other articles (112)

  • Customizing by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (15805)

  • avutil: add hwcontext_amf.

    15 October 2024, by Dmitrii Ovchinnikov

    Adds hwcontext_amf, enabling a shared AMF context for encoders,
    decoders, and AMF-based filters, without copying to host memory.
    The code was also tested in HandBrake.

    Benefits:
    - Optimizations for direct video-memory access from the CPU
    - Significant performance boost in full AMF pipelines with filters
    - Integration of GPU filters such as VPP, Super Resolution, and
    Compression Artefact Removal (planned)
    - VCN power-management control for decoders
    - Ability to specify which VCN instance to use for decoding
    (as for the encoder)
    - AMD will soon introduce a full AMF API for the MA35D multimedia
    accelerator; with the AMF API, integration will be much easier:
    the GPU and the accelerator will share the same API, including
    encoder, decoder, scaler, and color converter, on Windows and Linux.
    Learn more:
    https://www.amd.com/en/products/accelerators/alveo/ma35d.html

    Changes by version:
    v2: header file cleanup.
    v3: removed an unnecessary class.
    v4: code cleanup and improved error handling.
    v5: fixes related to HandBrake integration.
    v6: fixed a sequential-filters error and a memory leak.

    • [DH] libavutil/Makefile
    • [DH] libavutil/hwcontext.c
    • [DH] libavutil/hwcontext.h
    • [DH] libavutil/hwcontext_amf.c
    • [DH] libavutil/hwcontext_amf.h
    • [DH] libavutil/hwcontext_amf_internal.h
    • [DH] libavutil/hwcontext_internal.h
    • [DH] libavutil/pixdesc.c
    • [DH] libavutil/pixfmt.h
  • ffmpeg library pcm to ac3 encoding

    16 July 2014, by Dave Camp

    I’m new to the ffmpeg library and I’m working on a custom DirectShow filter. I decided to use the ffmpeg library for the encoding I need to achieve. I’m a little confused about some of the parameters and the values ffmpeg expects.

    I’m currently working on the ac3 part of the custom filter.
    I’ve looked through the audio-encoding example (for MP2) in the ffmpeg documentation and I understand it, but I don’t understand how to adapt it to my specific needs.

    The incoming samples are 48K samples per second, 16 bits per sample, stereo interleaved. The upstream filter delivers them at 25 fps, so I get an incoming ’audio sample packet’ of 1920 bytes for each audio frame. I want to encode that data into an AC3 packet that I pass on to the next stage, which I handle myself.
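A quick sanity check on those figures (a sketch, assuming only the numbers stated in the question): at 48 kHz and 25 fps, each video frame spans 1920 audio samples per channel, which for interleaved 16-bit stereo is 7680 bytes, so the 1920 figure appears to count samples per channel rather than bytes.

```python
# Sanity check of the per-video-frame audio arithmetic from the question.
SAMPLE_RATE = 48000   # samples per second, per channel
FPS = 25              # video frames per second
CHANNELS = 2          # stereo
BYTES_PER_SAMPLE = 2  # 16-bit PCM

samples_per_video_frame = SAMPLE_RATE // FPS  # per channel
bytes_per_video_frame = samples_per_video_frame * CHANNELS * BYTES_PER_SAMPLE

print(samples_per_video_frame)  # 1920
print(bytes_per_video_frame)    # 7680
```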

    But I’m unsure of the correct parameters for each component in the following code...

    The code I have so far. There are several questions in the comments at key places.

    AVCodec*         g_pCodec = nullptr;
    AVCodecContext*  g_pContext = nullptr;
    AVFrame*         g_pFrame = nullptr;
    AVPacket         g_pPacket;
    LPVOID           g_pInSampleBuffer;

    avcodec_register_all();
    g_pCodec = avcodec_find_encoder(CODEC_ID_AC3);

    // What am I'm describing here? the incoming sample params or the outgoing sample params?
    // An educated guess is the outgoing sample params
    g_pContext = avcodec_alloc_context3(g_pCodec);
    g_pContext->bit_rate = 448000;
    g_pContext->sample_rate = 48000;
    g_pContext->channels = 2;
    g_pContext->sample_fmt = AV_SAMPLE_FMT_FLTP;
    g_pContext->channel_layout = AV_CH_LAYOUT_STEREO;

    // And this is the incoming sample params?
    g_pFrame = av_frame_alloc();
    g_pFrame->nb_samples = 1920; // ?? What figure is the codec expecting me to give it here? 1920 / bytes_per_sample?
    g_pFrame->format = AV_SAMPLE_FMT_S16;
    g_pFrame->channel_layout = AV_CH_LAYOUT_STEREO;

    // I assume this going to give me the size of a buffer that I use to fill with my incoming samples? I get a dwSize of 15360 but my samples are only coming in at 1920, does this matter?
    int dwSize = av_samples_get_buffer_size(nullptr,2,1920,AV_SAMPLE_FMT_S16,0);

    // do I need to use av_malloc and copy my samples into g_pInSampleBuffer or can I supply the address of my own buffer ( created outside of the libav framework ) ?
    g_pInSampleBuffer = (LPVOID)av_malloc(dwSize);
    avcodec_fill_audio_frame(g_pFrame,2,AV_SAMPLE_FMT_S16,(const uint8_t*)g_pInSampleBuffer,dwSize,0);

    // Encoding loop - samples are given to me through a directshow interface - DSInbuffer is the buffer containing the incoming samples
    av_init_packet(&g_pPacket);
    g_pPacket.data = nullptr;
    g_pPacket.size = 0;

    int gotpacket = 0;
    int ok = avcodec_encode_audio2(g_pContext,&g_pPacket,g_pFrame,&gotpacket);
    if((ok == 0) && gotpacket){
      // I then copy from g_pPacket.data an amount of g_pPacket.size bytes into another directshow interface buffer that sends the encoded sample downstream.

       av_free_packet(&g_pPacket);
    }

    Currently it crashes at the avcodec_encode_audio2 call. If I change the format parameter to AV_SAMPLE_FMT_FLTP in the avcodec_fill_audio_frame call then it doesn’t crash, but it only encodes one frame of data and I get error -22 on the next frame. The g_pPacket.size parameter is 1792 (7 * 256) after the first avcodec_encode_audio2 call.
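For what it’s worth, the 1792-byte packet is exactly what AC-3 at this bit rate should produce: an AC-3 frame always covers 1536 samples per channel, and this sketch (assuming only the bit rate and sample rate set in the code above) reproduces the figure.

```python
# Why the first encoded packet is 1792 bytes: an AC-3 frame always
# spans 1536 samples per channel, independent of the 1920-sample
# cadence of the incoming video frames.
BIT_RATE = 448000         # bits per second, as set on the codec context
SAMPLE_RATE = 48000       # Hz
AC3_FRAME_SAMPLES = 1536  # fixed AC-3 frame size, samples per channel

bits_per_ac3_frame = BIT_RATE * AC3_FRAME_SAMPLES // SAMPLE_RATE  # 14336
bytes_per_ac3_frame = bits_per_ac3_frame // 8

print(bytes_per_ac3_frame)  # 1792, i.e. 7 * 256
```

This also hints at one plausible source of the -22 error: avcodec_encode_audio2 expects each frame to carry exactly AVCodecContext.frame_size samples (1536 for AC-3), not the 1920 samples arriving per video frame.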

    As I’m new to ffmpeg, I’m sure it’s probably something quite straightforward that I’ve missed or misunderstood; I’m confused about where the parameters for the incoming samples go versus those for the outgoing samples.

    This is obviously an extract from the main function that I’ve created, typed manually into the forum. Any spelling mistakes are transcription errors; the original code compiles and runs.

    Dave.

  • cpu.c:253: x264_cpu_detect: Assertion

    12 October 2017, by user6341251

    environment:
    ubuntu 16.04_x64 server
    ffmpeg installed through apt-get
    python 3

    When I try:

    from moviepy.editor import *
    clip = VideoFileClip("/root/video.mp4")
    clip.ipython_display(width=280)

    Traceback (most recent call last):
    File "", line 1, in
    File "/usr/local/lib/python2.7/dist-packages/moviepy/video/io/html_tools.py", line 219, in ipython_display
    center=center, rd_kwargs=rd_kwargs, **html_kwargs))
    File "/usr/local/lib/python2.7/dist-packages/moviepy/video/io/html_tools.py", line 97, in html_embed
    clip.write_videofile(**kwargs)
    File "", line 2, in write_videofile
    File "/usr/local/lib/python2.7/dist-packages/moviepy/decorators.py", line 54, in requires_duration
    return f(clip, *a, **k)
    File "", line 2, in write_videofile
    File "/usr/local/lib/python2.7/dist-packages/moviepy/decorators.py", line 137, in use_clip_fps_by_default
    return f(clip, *new_a, **new_kw)
    File "", line 2, in write_videofile
    File "/usr/local/lib/python2.7/dist-packages/moviepy/decorators.py", line 22, in convert_masks_to_RGB
    return f(clip, *a, **k)
    File "/usr/local/lib/python2.7/dist-packages/moviepy/video/VideoClip.py", line 349, in write_videofile
    progress_bar=progress_bar)
    File "/usr/local/lib/python2.7/dist-packages/moviepy/video/io/ffmpeg_writer.py", line 216, in ffmpeg_write_video
    writer.write_frame(frame)
    File "/usr/local/lib/python2.7/dist-packages/moviepy/video/io/ffmpeg_writer.py", line 178, in write_frame
    raise IOError(error)
    IOError: [Errno 32] Broken pipe

    MoviePy error: FFMPEG encountered the following error while writing file temp.mp4:

    ffmpeg: common/cpu.c:253: x264_cpu_detect: Assertion `!(cpu&(0x0000040|0x0000080))’ failed.

    What happened?


    @Ronald S. Bultje

    I am using a virtual machine:

    processor : 0
    vendor_id : GenuineIntel
    cpu family : 6
    model : 13
    model name : QEMU Virtual CPU version (cpu64-rhel6)
    stepping : 3
    microcode : 0x1
    cpu MHz : 3504.000
    cache size : 4096 KB
    physical id : 0
    siblings : 1
    core id : 0
    cpu cores : 1
    apicid : 0
    initial apicid : 0
    fpu : yes
    fpu_exception : yes
    cpuid level : 13
    wp : yes
    flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm rep_good nopl eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm fsgsbase bmi1 avx2 smep bmi2 xsaveopt
    bugs :
    bogomips : 7008.00
    clflush size : 64
    cache_alignment : 64
    address sizes : 39 bits physical, 48 bits virtual
    power management :