
Media (1)
-
Rennes Emotion Map 2010-11
October 19, 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (56)
-
Permissions overridden by plugins
April 27, 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page
-
HTML5 audio and video support
April 10, 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer flash player is used.
The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...)
-
HTML5 audio and video support
April 13, 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (5045)
-
Python convert binary data (str) to match multiple of float32 size
April 15, 2016, by Kendall Weihe
I'm working on a tensorflow project, and I'm working with raw data from an mp3 file. I'm using FFMPEG and capturing STDOUT. FFMPEG outputs binary data of type str, but it doesn't produce the data in a multiple of the size of a float32... or at least that's what I assume from the error I get from the following code:

import subprocess as sp
import numpy as np
command = ["ffmpeg",
'-i', image_file,
'-f', 's16le',
'-acodec', 'pcm_s16le',
'-ar', '44100', # output will have 44100 Hz
'-ac', '2', # stereo (set to '1' for mono)
'-']
pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.STDOUT, stdin=sp.PIPE)
raw_audio, _ = pipe.communicate(str(88200*4).encode())
audio_array = np.fromstring(raw_audio, dtype="float32")  # ERROR
dataset = np.append(dataset, audio_array)

Error:
File "main.py", line 68, in maybe_pickle
audio_array = np.fromstring(raw_audio, dtype="float32")
ValueError: string size must be a multiple of element size

raw_audio is in the form:
...7k\x01t\xf9\xc3\x01\xe3\xfa\x11\x02\xfb\xfb~\xff\x90\xfb\xbb\xfc\x83\xfa\xb4\xfd\xde\xfb\xab\xff\x01\xff\xd4\xfe:\x00T\xfd\xe0\xff\xae\xfdV\x01\xd8\xfd\x1a\x04\x0f\xfcR\x05,\xfaG\x05\xb1\xfaD\x05\xa1\xfdP\x04e\x00\xbc\x01\xe6\x00\xdf\xfe`\x00\x16\xfd\xae\x00\xec\xfc\xb7\x00\x7f\...
Is there a way I can truncate the data? Or maybe capture the string in an entirely different way?
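
One possible way to deal with the trailing bytes, sketched here as an illustration rather than a definitive fix: since -acodec pcm_s16le actually emits 16-bit little-endian integers, the buffer can be truncated to a whole number of samples, read as int16, and then converted to float32. The helper name to_float32 is made up for this sketch.

import numpy as np

def to_float32(raw_audio):
    # pcm_s16le emits 2-byte samples, so drop any trailing partial sample
    # before interpreting the buffer.
    usable = len(raw_audio) - (len(raw_audio) % 2)
    samples = np.frombuffer(raw_audio[:usable], dtype=np.int16)
    # Scale the int16 samples into float32 in the range [-1.0, 1.0].
    return samples.astype(np.float32) / 32768.0

# Example use with the bytes captured above:
# audio_array = to_float32(raw_audio)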
-
How can I re-encode a video to match another's codec exactly?
January 24, 2020, by Stephen Schrauger
When I'm on vacation, I usually use our camcorder to record videos. Since they're all in the same format, I can use ffmpeg to concat them into one large, smooth video without re-encoding.
However, sometimes I will use a phone or other camera to record a video (if the camcorder ran out of space/battery or was left at a hotel).
I'd like to determine the codec, framerate, etc. used by my camcorder and use those parameters to convert the phone videos into the same format. That way, I will be able to concatenate all the videos without re-encoding the camcorder videos.
Using ffprobe, I found my camcorder has this encoding:
Input #0, mpegts, from 'camcorderfile.MTS':
Duration: 00:00:09.54, start: 1.936367, bitrate: 24761 kb/s
Program 1
Stream #0:0[0x1011]: Video: h264 (High) (HDPR / 0x52504448), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], 59.94 fps, 59.94 tbr, 90k tbn, 119.88 tbc
Stream #0:1[0x1100]: Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, fltp, 256 kb/s
Stream #0:2[0x1200]: Subtitle: hdmv_pgs_subtitle ([144][0][0][0] / 0x0090), 1920x1080

The phone (iPhone 5s) encoding is:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'mov.MOV':
Metadata:
major_brand : qt
minor_version : 0
compatible_brands: qt
creation_time : 2017-01-02T03:04:05.000000Z
com.apple.quicktime.location.ISO6709: +12.3456-789.0123+456.789/
com.apple.quicktime.make: Apple
com.apple.quicktime.model: iPhone 5s
com.apple.quicktime.software: 10.2.1
com.apple.quicktime.creationdate: 2017-01-02T03:04:05-0700
Duration: 00:00:14.38, start: 0.000000, bitrate: 11940 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080, 11865 kb/s, 29.98 fps, 29.97 tbr, 600 tbn, 1200 tbc (default)
Metadata:
creation_time : 2017-01-02T03:04:05.000000Z
handler_name : Core Media Data Handler
encoder : H.264
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 63 kb/s (default)
Metadata:
creation_time : 2017-01-02T03:04:05.000000Z
handler_name : Core Media Data Handler
Stream #0:2(und): Data: none (mebx / 0x7862656D), 0 kb/s (default)
Metadata:
creation_time : 2017-01-02T03:04:05.000000Z
handler_name : Core Media Data Handler
Stream #0:3(und): Data: none (mebx / 0x7862656D), 0 kb/s (default)
Metadata:
creation_time : 2017-01-02T03:04:05.000000Z
handler_name : Core Media Data Handler
I'm presuming that ffmpeg will automatically accept any valid video format as input, and that I only need to figure out the output settings. I think I need to use -s 1920x1080 and -pix_fmt yuv420p for the output, but what other flags do I need in order to give the phone video the same encoding as the camcorder video? Can I get some pointers as to how to translate the ffprobe output into the flags I need to give to ffmpeg?
Edit: Added the entire Input #0 for both media files.
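
As a hedged sketch rather than a verified recipe, the camcorder's ffprobe lines map roughly onto output flags like the ones below. This reuses the subprocess style from the earlier question; phone_file and output_file are placeholder names, and the values (H.264 High, 59.94 fps, yuv420p, AC-3 at 48 kHz stereo 256 kb/s, MPEG-TS container) are read off the camcorder probe output above.

import subprocess as sp

phone_file = "mov.MOV"          # placeholder input name
output_file = "converted.MTS"   # placeholder output name

command = ["ffmpeg",
           "-i", phone_file,
           "-c:v", "libx264",       # camcorder video stream is h264 (High)
           "-profile:v", "high",
           "-pix_fmt", "yuv420p",
           "-s", "1920x1080",
           "-r", "60000/1001",      # 59.94 fps, as reported by ffprobe
           "-c:a", "ac3",           # camcorder audio stream is AC-3
           "-ar", "48000",
           "-ac", "2",
           "-b:a", "256k",
           "-f", "mpegts",          # the .MTS source is an MPEG-TS container
           output_file]
sp.run(command, check=True)

Even with matched codecs, the concat demuxer can still be sensitive to details such as time base and field order, so it is worth testing the result on a short clip first.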
-
Revision 6c280c2299: Adjust style to match Google Coding Style a little more closely. Most of these
November 1, 2012, by Ronald S. Bultje
Changed Paths: Modify /vp8/common/arm/arm_systemdependent.c Modify /vp8/common/debugmodes.c Modify /vp8/common/entropymode.c Modify /vp8/common/entropymode.h Modify /vp8/common/extend.c Modify /vp8/common/extend.h Modify /vp8/common/filter.c (...)