
Other articles (21)
-
Helping translate it
10 April 2011. You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
MediaSPIP is currently only available in French and (...) -
Supporting all media types
13 April 2011. Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
-
Encoding and processing into web-friendly formats
13 April 2011. MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed in order to retrieve the data needed for search engine indexing, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
On other sites (5127)
-
FFmpeg: what is the correct way to manually write silence through pipe:0?
19 July 2023, by Bohdan Petrenko. I have an ffmpeg process running with these parameters:


ffmpeg -y -f s16le -ac {Channels} -ar 48000 -re -use_wallclock_as_timestamps true -i pipe:0 -f segment -segment_time {_segmentSize} -segment_list \"{_segmentListPath}\" -segment_format mp3 -segment_wrap 2 -reset_timestamps 0 -af aresample=async=1 \"{_filePath}\"



I also have a DateTimeOffset which represents the time when the recording was started. When the ffmpeg process is created, I need to add an amount of silence equal to the delay between the current time and the time the recording was started. This delay may be longer than an ffmpeg segment, so I calculate it relative to the time when the last ffmpeg segment should begin.
I store the silence in a static byte array whose length covers two ffmpeg segments:

_silenceBuffer ??= new byte[_segmentSize * 2 * Channels * SampleRate * 2];



I tried two ways of writing silence:


The first code I tried is this:


var delay = DateTimeOffset.UtcNow - RecordingStartDateTime;

var time = CalculateRelativeMilliseconds(delay.TotalMilliseconds); // this returns time based on current segment. It works fine.

var amount = (int)(time * 2 * Channels * SampleRate / 1000);

WriterStream.Write(_silenceBuffer, 0, amount);



As a result, I get very loud noise throughout the ffmpeg output. It breaks the audio, so this way doesn't work for me.


The second code I tried is this:


var delay = DateTimeOffset.UtcNow - RecordingStartDateTime;

var time = CalculateRelativeMilliseconds(delay.TotalMilliseconds); // this returns time based on current segment. It works fine.

var amount = (int)time * 2 * Channels * SampleRate / 1000;

WriterStream.Write(_silenceBuffer, 0, amount);



The difference between the first and second code is that now I cast only time to int, not the result of the whole expression. But it also doesn't work. This time there is no silence at the beginning: the recording starts with the voice data I piped after writing the silence. But if I use this ffmpeg command:

ffmpeg -y -f s16le -ac {Channels} -ar 48000 -i pipe:0 -f segment -segment_time {_segmentSize} -segment_list \"{_segmentListPath}\" -segment_format mp3 -segment_wrap 2 -reset_timestamps 0 \"{_filePath}\"



Then it works as expected: the recording begins with the silence I need, followed by the voice data I piped.


So, how can I manually calculate and write silence to my ffmpeg instance? Is there some universal way of calculating and writing silence that will work with any ffmpeg command? I don't want to use filters or additional ffmpeg instances to offset the piped voice data, because I only do this once per session. I think I can write the silence with byte arrays. I look forward to any suggestions.
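One possible explanation for the noise (an assumption, not a confirmed diagnosis): if the byte count written to an s16le pipe is not a multiple of the frame size (channels × 2 bytes), every subsequent sample is shifted by a byte, which decoders render as loud noise. A way to sidestep both casting attempts is to truncate to whole frames first and only then multiply by the frame size. The function names below are hypothetical, a sketch rather than the asker's code:

```go
package main

import "fmt"

const bytesPerSample = 2 // s16le: 16-bit little-endian PCM

// silenceByteCount converts a delay in milliseconds into a byte count that
// is always a whole number of frames. Truncating to frames first guarantees
// a write never splits a sample, so alignment is preserved.
func silenceByteCount(ms float64, channels, sampleRate int) int {
	frames := int(ms * float64(sampleRate) / 1000.0) // whole frames only
	return frames * channels * bytesPerSample
}

func main() {
	// 1 s of stereo 48 kHz silence: 48000 frames * 2 ch * 2 bytes = 192000
	fmt.Println(silenceByteCount(1000, 2, 48000))
}
```

Whatever formula is used, asserting `count % (channels * 2) == 0` before the Write call would quickly confirm or rule out misalignment as the cause.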


-
avformat_seek_file timestamps not using the correct time base
19 June 2021, by Charlie. I am in the process of creating a memory loader for ffmpeg to add more functionality. I have audio playing and working, but am having an issue with avformat_seek_file using timestamps in the wrong time base.

avformat.avformat_seek_file(file.context, -1, 0, timestamp, timestamp, 0)



From looking at the docs, if the stream index is -1 the time should be based on AV_TIME_BASE. When I load the file through avformat_open_input with a null AVFormatContext and a filename, this works as expected.

However, when I create my own AVIOContext and AVFormatContext through avio_alloc_context and avformat_alloc_context respectively, the timestamps are no longer based on AV_TIME_BASE. When testing I got an access violation the first time I tried seeking, and upon investigating, it seems the timestamps are now based on actual seconds. How can I make these custom contexts use timestamps based on AV_TIME_BASE?

The only difference between the two is the custom loading of AVIOContext and AVFormatContext:

data = fileobject.read()
ld = len(data)

buf = libavutil.avutil.av_malloc(ld)
ptr_buf = cast(buf, c_char_p)

ptr = ctypes.create_string_buffer(ld)
memmove(ptr, data, ld)

seeker = libavformat.ffmpeg_seek_func(seek_data)
reader = libavformat.ffmpeg_read_func(read_data)
writer = libavformat.ffmpeg_read_func(write_data)

format = libavformat.avformat.avio_alloc_context(ptr_buf, buf_size, 0,
                                                 ptr_data,
                                                 reader,
                                                 writer,
                                                 seeker)

file.context = libavformat.avformat.avformat_alloc_context()
file.context.contents.pb = format
file.context.contents.flags |= AVFMT_FLAG_CUSTOM_IO

result = avformat.avformat_open_input(byref(file.context),
                                      b"",
                                      None,
                                      None)

if result != 0:
    raise FFmpegException('avformat_open_input in ffmpeg_open_filename returned an error opening file '
                          + filename.decode("utf8")
                          + ' Error code: ' + str(result))

result = avformat.avformat_find_stream_info(file.context, None)
if result < 0:
    raise FFmpegException('Could not find stream info')

return file




Here is the filename code that does work:


result = avformat.avformat_open_input(byref(file.context),
                                      filename,
                                      None,
                                      None)
if result != 0:
    raise FFmpegException('avformat_open_input in ffmpeg_open_filename returned an error opening file '
                          + filename.decode("utf8")
                          + ' Error code: ' + str(result))

result = avformat.avformat_find_stream_info(file.context, None)
if result < 0:
    raise FFmpegException('Could not find stream info')

return file



I am new to ffmpeg, but any help fixing this discrepancy is greatly appreciated.
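For reference, the unit conversion the docs describe can be sketched with plain arithmetic (no FFmpeg calls; AV_TIME_BASE is 1,000,000 ticks per second). With stream_index == -1 the seek target must be in AV_TIME_BASE units, while seeking on a concrete stream index requires that stream's own time_base:

```go
package main

import "fmt"

const avTimeBase = 1000000 // AV_TIME_BASE: microseconds per second

// toAVTimeBase converts seconds to the units avformat_seek_file expects
// when stream_index is -1.
func toAVTimeBase(seconds float64) int64 {
	return int64(seconds * avTimeBase)
}

// toStreamTimeBase converts seconds to a stream's time_base (num/den),
// the units required when seeking on a specific stream index.
func toStreamTimeBase(seconds float64, num, den int64) int64 {
	return int64(seconds * float64(den) / float64(num))
}

func main() {
	fmt.Println(toAVTimeBase(2.5))               // 2500000
	fmt.Println(toStreamTimeBase(2.5, 1, 44100)) // 110250
}
```

If the custom-IO path is somehow seeking on a real stream rather than index -1, passing seconds scaled this way (or via FFmpeg's own av_rescale_q) rather than raw seconds would be the first thing to check.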


-
Go / Cgo: how to access a field of a C struct - couldn't make it work
15 August 2021, by ChrisG. I am developing an application in Go to transcode an audio file from one format to another.


I use the goav library, which uses cgo to bind the FFmpeg C libraries:
https://github.com/giorgisio/goav/



In the goav library, package avformat has a type definition that wraps the original FFmpeg C struct AVOutputFormat:

type (
	OutputFormat C.struct_AVOutputFormat
)



In my code I have a variable called outputF of the type OutputFormat, which is a C.struct_AVOutputFormat.

The real C AVOutputFormat struct has the fields:

name, long_name, mime_type, extensions, audio_codec, video_codec, subtitle_codec, ...

and many more.


See: https://ffmpeg.org/doxygen/2.6/structAVOutputFormat.html



I verified the situation with fmt.Println(outputF) and got:

{0x7ffff7f23383 0x7ffff7f23907 0x7ffff7f13c33 0x7ffff7f23383 86017 61 0 128 <nil> 0x7ffff7f8cfa0 <nil> 3344 0x7ffff7e3ec10 0x7ffff7e3f410 0x7ffff7e3ecc0 <nil> 0x7ffff7e3dfc0 <nil> <nil> <nil> <nil> <nil> <nil> 0 0x7ffff7e3e070 0x7ffff7e3e020 <nil>}


The audio codec field is at position 5 and contains 86017.

I verified the field name using the reflect package:

val := reflect.Indirect(reflect.ValueOf(outputF))
fmt.Println(val)
fmt.Println("Fieldname: ", val.Type().Field(4).Name)

Output:
Fieldname: audio_codec




I try to access the field audio_codec of the original AVOutputFormat using:

fmt.Println(outputF.audio_codec)
ERROR: outputF.audio_codec undefined (cannot refer to unexported field or method audio_codec)


fmt.Println(outputF._audio_codec)
ERROR: outputF._audio_codec undefined (type *avformat.OutputFormat has no field or method _audio_codec)





As I read in the cgo documentation:
Within the Go file, C's struct field names that are keywords in Go can be accessed by prefixing them with an underscore : if x points at a C struct with a field named "type", x._type accesses the field. C struct fields that cannot be expressed in Go, such as bit fields or misaligned data, are omitted in the Go struct, replaced by appropriate padding to reach the next field or the end of the struct.




But I have no idea what I'm doing wrong.


Edit:
Okay, no underscore is required, since audio_codec is not a keyword in Go. I understand that now. But the question remains: why am I not able to access the C struct field "audio_codec"?
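The underscore rule only applies to C fields whose names are Go keywords (like type); audio_codec's real problem is that a lowercase field name is unexported, so it is invisible outside the package that defines OutputFormat. One workaround, sketched under the assumption that the field is merely unexported rather than omitted by cgo: read it via reflect plus unsafe through its address. The hidden struct below is a hypothetical stand-in for avformat.OutputFormat:

```go
package main

import (
	"fmt"
	"reflect"
	"unsafe"
)

// hidden stands in for a struct defined in another package whose fields,
// like cgo's audio_codec, are lowercase and therefore unexported.
type hidden struct {
	name        string
	audio_codec int32
}

// readInt32Field reads an unexported int32 field by name. FieldByName on an
// unexported field cannot be Interface()d, but its address is still valid
// when the struct value is addressable (i.e. reached through a pointer).
func readInt32Field(p interface{}, field string) int32 {
	v := reflect.Indirect(reflect.ValueOf(p)) // addressable struct value
	f := v.FieldByName(field)
	return *(*int32)(unsafe.Pointer(f.UnsafeAddr()))
}

func main() {
	f := &hidden{name: "mp3", audio_codec: 86017}
	fmt.Println(readInt32Field(f, "audio_codec")) // 86017
}
```

A cleaner long-term fix is a tiny cgo accessor in your own package (a C helper function returning f->audio_codec), or a getter method if goav provides one; the reflect approach above merely avoids touching C at all.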