Other articles (21)

  • Participate in its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
    MediaSPIP is currently only available in French and (...)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search engine indexing, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

On other sites (5127)

  • FFmpeg: what is the correct way to manually write silence through pipe:0?

    19 July 2023, by Bohdan Petrenko

    I have an ffmpeg process running with these parameters:

    ffmpeg -y -f s16le -ac {Channels} -ar 48000 -re -use_wallclock_as_timestamps true -i pipe:0 -f segment -segment_time {_segmentSize} -segment_list \"{_segmentListPath}\" -segment_format mp3 -segment_wrap 2 -reset_timestamps 0 -af aresample=async=1 \"{_filePath}\"

    I also have a DateTimeOffset which represents the time when the recording was started. When an FFmpeg process is created, I need to add an amount of silence equal to the delay between the current time and the time the recording started. This delay may be longer than an ffmpeg segment, so I calculate it relative to the time when the last ffmpeg segment should begin.
    I store the silence in a static byte array whose length is two ffmpeg segments:

    _silenceBuffer ??= new byte[_segmentSize * 2 * Channels * SampleRate * 2];

    I tried two ways of writing silence.

    The first code I tried is this:

    var delay = DateTimeOffset.UtcNow - RecordingStartDateTime;
    var time = CalculateRelativeMilliseconds(delay.TotalMilliseconds); // returns time based on the current segment; this works fine
    var amount = (int)(time * 2 * Channels * SampleRate / 1000);
    WriterStream.Write(_silenceBuffer, 0, amount);

    As a result, I get very loud noise throughout the ffmpeg output. It breaks the audio, so this approach doesn't work for me.

    The second code I tried is this:

    var delay = DateTimeOffset.UtcNow - RecordingStartDateTime;
    var time = CalculateRelativeMilliseconds(delay.TotalMilliseconds); // returns time based on the current segment; this works fine
    var amount = (int)time * 2 * Channels * SampleRate / 1000;
    WriterStream.Write(_silenceBuffer, 0, amount);

    The difference between the first and second snippets is that here I cast only time to int, not the result of the whole expression. But this doesn't work either: now there is no silence at the beginning, and the recording starts with the voice data I piped after writing the silence. However, if I use this ffmpeg command:

    ffmpeg -y -f s16le -ac {Channels} -ar 48000 -i pipe:0 -f segment -segment_time {_segmentSize} -segment_list \"{_segmentListPath}\" -segment_format mp3 -segment_wrap 2 -reset_timestamps 0 \"{_filePath}\"

    Then it works as expected: the recording begins with the silence I need, followed by the voice data I piped.

    So, how can I manually calculate and write silence to my ffmpeg instance? Is there a universal way of calculating and writing silence that will work with any ffmpeg command? I don't want to use filters or extra ffmpeg instances to offset the piped voice data, because I only do this once per session. I think I should be able to write the silence as byte arrays. I look forward to any suggestions.

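    For reference, a minimal sketch of the byte arithmetic the question describes, written in Go for illustration; the names and constant values are placeholders, not the asker's code. With 16-bit signed PCM, a write that is not a multiple of channels * 2 bytes shifts every later sample by a byte, which on its own is enough to turn the rest of the stream into loud noise (whether that is what happens here cannot be told from the snippets alone), so the count is rounded down to a whole frame:

    package main

    import "fmt"

    const (
        sampleRate   = 48000 // -ar 48000
        channels     = 2     // -ac 2 (example value)
        bytesPerSamp = 2     // s16le: 2 bytes per sample
        frameBytes   = channels * bytesPerSamp
    )

    // silenceBytes converts a delay in milliseconds into a number of zero bytes,
    // rounded down to a whole frame so the raw stream stays sample-aligned.
    func silenceBytes(delayMs float64) int {
        n := int(delayMs / 1000 * float64(sampleRate) * float64(frameBytes))
        return n - n%frameBytes
    }

    func main() {
        // e.g. a 1.5 s gap -> 288000 zero bytes written to ffmpeg's stdin (pipe:0)
        fmt.Println(silenceBytes(1500))
    }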

  • avformat_seek_file timestamps not using the correct time base

    19 June 2021, by Charlie

    I am in the process of creating a memory loader for ffmpeg to add more functionality. I have audio playing and working, but I am having an issue with avformat_seek_file timestamps using the wrong time base.

    avformat.avformat_seek_file(file.context, -1, 0, timestamp, timestamp, 0)

    From looking at the docs, if the stream index is -1 the timestamps should be based on AV_TIME_BASE. When I load the file through avformat_open_input with a null AVFormatContext and a filename, this works as expected.

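    As a point of reference, a minimal sketch of the unit conversion involved, in Go; the helper name is made up and this is not the asker's code. AV_TIME_BASE in FFmpeg is 1,000,000, so with stream_index -1 the min_ts/ts/max_ts arguments are microsecond ticks rather than seconds:

    package main

    import "fmt"

    const avTimeBase = 1_000_000 // FFmpeg's AV_TIME_BASE: ticks per second

    // secondsToAVTimeBase scales a position in seconds into AV_TIME_BASE units,
    // the unit avformat_seek_file expects when the stream index is -1.
    func secondsToAVTimeBase(seconds float64) int64 {
        return int64(seconds * avTimeBase)
    }

    func main() {
        fmt.Println(secondsToAVTimeBase(12.5)) // 12500000
    }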

    However, when I create my own AVIOContext and AVFormatContext through avio_alloc_context and avformat_alloc_context respectively, the timestamps are no longer based on AV_TIME_BASE. When testing, I got an access violation the first time I tried seeking, and upon investigating, it seems the timestamps are now based on actual seconds. How can I make these custom contexts use AV_TIME_BASE?

    The only difference between the two is the custom loading of the AVIOContext and AVFormatContext:

    data = fileobject.read()

    ld = len(data)

    buf = libavutil.avutil.av_malloc(ld)
    ptr_buf = cast(buf, c_char_p)

    ptr = ctypes.create_string_buffer(ld)
    memmove(ptr, data, ld)

    seeker = libavformat.ffmpeg_seek_func(seek_data)
    reader = libavformat.ffmpeg_read_func(read_data)
    writer = libavformat.ffmpeg_read_func(write_data)

    format = libavformat.avformat.avio_alloc_context(ptr_buf, buf_size, 0,
                                                     ptr_data,
                                                     reader,
                                                     writer,
                                                     seeker
                                                     )

    file.context = libavformat.avformat.avformat_alloc_context()
    file.context.contents.pb = format
    file.context.contents.flags |= AVFMT_FLAG_CUSTOM_IO

    result = avformat.avformat_open_input(byref(file.context),
                                          b"",
                                          None,
                                          None)

    if result != 0:
        raise FFmpegException('avformat_open_input in ffmpeg_open_filename returned an error opening file '
                              + filename.decode("utf8")
                              + ' Error code: ' + str(result))

    result = avformat.avformat_find_stream_info(file.context, None)
    if result < 0:
        raise FFmpegException('Could not find stream info')

    return file

    Here is the filename code that does work:

    result = avformat.avformat_open_input(byref(file.context),
                                          filename,
                                          None,
                                          None)
    if result != 0:
        raise FFmpegException('avformat_open_input in ffmpeg_open_filename returned an error opening file '
                              + filename.decode("utf8")
                              + ' Error code: ' + str(result))

    result = avformat.avformat_find_stream_info(file.context, None)
    if result < 0:
        raise FFmpegException('Could not find stream info')

    return file

    I am new to ffmpeg, but any help fixing this discrepancy is greatly appreciated.

  • Go / Cgo - How to access a field of a C struct - could not make it work

    15 August 2021, by ChrisG

    I am developing an application in Go to transcode an audio file from one format to another:

    I use the goav library, which uses Cgo to bind the FFmpeg C libraries: https://github.com/giorgisio/goav/

    The goav library's avformat package has a typedef for the original FFmpeg C struct AVOutputFormat:

    type (
        OutputFormat C.struct_AVOutputFormat
    )

    In my code I have a variable called outputF of type OutputFormat, which is a C.struct_AVOutputFormat.

    The real C AVOutputFormat struct has the fields:

    name, long_name, mime_type, extensions, audio_codec, video_codec, subtitle_codec, ...

    and many more fields.

    See: https://ffmpeg.org/doxygen/2.6/structAVOutputFormat.html

    I checked the value with fmt.Println(outputF) and got:

    {0x7ffff7f23383 0x7ffff7f23907 0x7ffff7f13c33 0x7ffff7f23383 86017 61 0 128 <nil> 0x7ffff7f8cfa0 <nil> 3344 0x7ffff7e3ec10 0x7ffff7e3f410 0x7ffff7e3ecc0 <nil> 0x7ffff7e3dfc0 <nil> <nil> <nil> <nil> <nil> <nil> 0 0x7ffff7e3e070 0x7ffff7e3e020 <nil>}

    The audio codec field is at position 5 and contains 86017.

    I verified the field name using the reflect package:

    val := reflect.Indirect(reflect.ValueOf(outputF))
    fmt.Println(val)
    fmt.Println("Fieldname: ", val.Type().Field(4).Name)

    Output:
    Fieldname:  audio_codec

    I try to access the audio_codec field of the original AVOutputFormat using:

    fmt.Println(outputF.audio_codec)
    ERROR: outputF.audio_codec undefined (cannot refer to unexported field or method audio_codec)

    fmt.Println(outputF._audio_codec)
    ERROR: outputF._audio_codec undefined (type *avformat.OutputFormat has no field or method _audio_codec)

    As I read in the Cgo documentation:
    "Within the Go file, C's struct field names that are keywords in Go can be accessed by prefixing them with an underscore: if x points at a C struct with a field named "type", x._type accesses the field. C struct fields that cannot be expressed in Go, such as bit fields or misaligned data, are omitted in the Go struct, replaced by appropriate padding to reach the next field or the end of the struct."

    But I have no idea what I'm doing wrong.

    Edit: Okay, no underscore is required, since audio_codec is not a keyword in Go; I understand that now. But the question remains: why am I not able to access the C struct field "audio_codec"?

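    An observation that may explain the error, plus a minimal cgo sketch; the struct below is a stand-in, not FFmpeg's real AVOutputFormat. In Go, identifiers that start with a lowercase letter are unexported, and that includes struct fields: because avformat.OutputFormat is defined in the goav avformat package, its lowercase cgo fields such as audio_codec cannot be reached from another package at all. The underscore rule quoted above only applies to field names that collide with Go keywords. The usual options are a getter provided by the wrapper package, if one exists for that field, or declaring the C type in your own package, where the field is accessible:

    package main

    /*
    // Stand-in struct for illustration only - not FFmpeg's AVOutputFormat.
    struct fake_output_format {
        const char *name;
        int         audio_codec;
    };
    */
    import "C"

    import "fmt"

    func main() {
        var f C.struct_fake_output_format
        f.audio_codec = 86017 // reachable here: the C type is declared in this package
        fmt.Println(f.audio_codec)

        // From another Go package the same lowercase field would be unexported,
        // which is exactly the "cannot refer to unexported field" error above.
    }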
