
Media (1)
-
MediaSPIP Simple : futur thème graphique par défaut ?
26 September 2013, by
Updated: October 2013
Language: French
Type: Video
Other articles (49)
-
Supporting all media types
13 April 2011, by
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
-
Permissions overridden by plugins
27 April 2010, by
Mediaspip core
autoriser_auteur_modifier() so that visitors can modify their information on the authors page -
Other interesting software
13 April 2011, by
We don’t claim to be the only ones doing what we do, and certainly don’t claim to be the best; we simply try to do it well and to keep improving.
The following list covers software that is more or less similar to MediaSPIP, or that MediaSPIP more or less tries to emulate.
We don’t know these projects and haven’t tried them, but you can take a peek.
Videopress
Website : http://videopress.com/
License : GNU/GPL v2
Source code : (...)
On other sites (10004)
-
saving ffmpeg output in a file field in django using tempfiles
27 January 2017, by StarLord
I am trying to extract audio from a video when it is uploaded, and store it in a different model.
This is the task code:
import json
import subprocess as sp
import tempfile as temp

# FFPROBE_BIN, Video and Audio are defined elsewhere in the project.
def set_metadata(input_file, video_id):
    """
    Takes the file path and video id, sets the metadata and audio
    :param input_file: file url or file path
    :param video_id: video id of the file
    :return: True
    """
    # extract metadata with ffprobe, as JSON, for the first video stream
    command = [FFPROBE_BIN,
               '-v', 'quiet',
               '-print_format', 'json',
               '-show_format',
               '-show_streams',
               '-select_streams', 'v:0',
               input_file]
    output = sp.check_output(command)
    output = output.decode('utf-8')
    metadata_output = json.loads(output)
    video_instance = Video.objects.get(pk=video_id)
    stream = metadata_output['streams'][0]
    # avg_frame_rate is a fraction such as "30000/1001"
    tot_frames_junk = int(stream['avg_frame_rate'].split("/")[0])
    tot_time_junk = int(stream['avg_frame_rate'].split("/")[1])
    # update the video model with the newly acquired metadata
    video_instance.width = stream['width']
    video_instance.height = stream['height']
    video_instance.frame_rate = tot_frames_junk / tot_time_junk
    video_instance.duration = stream['duration']
    video_instance.total_frames = stream['nb_frames']
    video_instance.video_codec = stream['codec_name']
    video_instance.save()
    # extract the audio track into a temporary file
    tmpfile = temp.NamedTemporaryFile(suffix='.mp2')
    command = ['ffmpeg',
               '-y',
               '-i', input_file,
               '-loglevel', 'quiet',
               '-acodec', 'copy',
               tmpfile.name]
    sp.check_output(command)
    audio_obj = Audio.objects.create(title='awesome', audio=tmpfile)
    audio_obj.save()

I am passing the video object id and the file path to the method. The file path is a URL from Azure storage.
The part where I process the metadata works, and the video object gets updated. But the audio model is not created.
It throws this error:
AttributeError("'_io.BufferedRandom' object has no attribute '_committed'",)
at the line:
audio_obj = Audio.objects.create(title='awesome', audio=tmpfile)
What am I doing wrong with the temp file?
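The excerpt shows no answer, but the traceback points at a likely cause: Django's FileField machinery expects its own File wrapper (which carries a `_committed` attribute), not a raw NamedTemporaryFile. A minimal sketch of that hypothesis; the `Audio` call in the comment reuses the asker's model and is untested here:

```python
import tempfile

# A plain NamedTemporaryFile has no `_committed` attribute; Django's FieldFile
# bookkeeping looks for it on assignment, which matches the AttributeError above.
tmp = tempfile.NamedTemporaryFile(suffix='.mp2')
print(hasattr(tmp, '_committed'))  # False

# Likely fix (an assumption, untested against the asker's models): wrap the
# handle in django.core.files.File and give it an explicit name before
# assigning it to the FileField:
#
#   from django.core.files import File
#   audio_obj = Audio.objects.create(
#       title='awesome',
#       audio=File(tmp, name='awesome.mp2'),
#   )
tmp.close()
```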
-
avfilter/avfilter : add a side_data field to AVFilterLink
25 March 2024, by James Almer -
Lossless codec for bayer data
21 August 2016, by vhdirk
I’m working with lots of cameras which natively capture in a BG bayer pattern.
Now, every time I record some data, I save it to disk in the raw bayer pattern, in an AVI container. The problem is that this really adds up after a while. After one year of research, I have close to 4TB of data...
So I’m looking for a lossless codec to compress this data. I know I could use libx264 (with --qp 0), HuffYUV, Dirac or JPEG 2000, but they all assume you have RGB or YUV data. It’s easy enough to convert the bayered data to RGB and then compress it, but it rather defeats the purpose of compression if you first triple the data. It would also mean that the demosaicing artefacts introduced by debayering would end up in my source data, which is not great either. It would be nice to have a codec that can work on the bayered data directly.
Even nicer would be a solution involving a codec already supported by GStreamer (or FFmpeg), since that’s what I am already using.
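One common workaround (a sketch, not taken from this thread) is to treat each 8-bit bayer frame as a single grayscale plane and feed it to a lossless codec such as FFV1 via FFmpeg's rawvideo demuxer: no debayering happens, so the stored bytes stay raw. The resolution, frame rate and file names below are placeholder assumptions:

```python
# Sketch: compress raw 8-bit bayer frames losslessly without debayering by
# telling ffmpeg to read each frame as one grayscale plane. Resolution,
# frame rate and paths are illustrative placeholders.
command = ['ffmpeg',
           '-f', 'rawvideo',            # headerless raw frames
           '-pixel_format', 'gray',     # raw bayer bytes, viewed as one plane
           '-video_size', '1920x1080',
           '-framerate', '25',
           '-i', 'capture.raw',
           '-c:v', 'ffv1',              # FFV1: lossless codec shipped with ffmpeg
           'capture_bayer.mkv']
print(' '.join(command))
```

Decoding reverses the trick: decode the FFV1 stream back to gray and reinterpret the bytes as the original BG bayer layout before demosaicing.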