
Media (1)
-
SPIP - plugins - embed code - Example
2 September 2013, by
Updated: September 2013
Language: French
Type: Image
Other articles (88)
-
Use, discuss, criticize
13 April 2011, by
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
-
Customize by adding your logo, banner or background image
5 September 2013, by
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Managing rights to create and edit objects
8 February 2011, by
By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, notably: writing content on the site (adjustable in the form template management); adding notes to articles; adding captions and annotations to images.
On other sites (8014)
-
Error using PyDub: pcm_s8 codec not supported in WAVE format
29 May 2020, by thf9527
Tried to read in a WAVE file.



Pydub has been working perfectly for the past few months, until I encountered a specific WAVE file that I could not import into Python (even though it plays fine in Windows Media Player and other players).



from pydub import AudioSegment, utils

file = r"NICE_Dev.wav"
print(utils.mediainfo(file))  # show the stream details ffprobe reports for the file

try:
    data = AudioSegment.from_file(file)
except Exception as e:
    print(e)




The error message is:



ffmpeg version 4.1.3 Copyright (c) 2000-2019 the FFmpeg developers
 built with gcc 8.3.1 (GCC) 20190414
 configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth
 libavutil 56. 22.100 / 56. 22.100
 libavcodec 58. 35.100 / 58. 35.100
 libavformat 58. 20.100 / 58. 20.100
 libavdevice 58. 5.100 / 58. 5.100
 libavfilter 7. 40.101 / 7. 40.101
 libswscale 5. 3.100 / 5. 3.100
 libswresample 3. 3.100 / 3. 3.100
 libpostproc 55. 3.100 / 55. 3.100
Guessed Channel Layout for Input Stream #0.0 : mono
Input #0, wav, from 'NICE_Dev.wav':
 Duration: 00:25:55.39, bitrate: 64 kb/s
 Stream #0:0: Audio: pcm_mulaw ([7][0][0][0] / 0x0007), 8000 Hz, mono, s16, 64 kb/s
Stream mapping:
 Stream #0:0 -> #0:0 (pcm_mulaw (native) -> pcm_s8 (native))
Press [q] to stop, [?] for help
[wav @ 00000265db3c1bc0] pcm_s8 codec not supported in WAVE format
Could not write header for output file #0 (incorrect codec parameters ?): Function not implemented
Error initializing output stream 0:0 -- 
Conversion failed!




I believe it is due to the pcm_s8 encoding, but I could not figure out how to solve this issue. The details of the audio file reported by utils.mediainfo are:



{'index': '0', 'codec_name': 'pcm_mulaw', 'codec_long_name': 'PCM mu-law / 
G.711 mu-law', 'profile': 'unknown', 'codec_type': 'audio', 'codec_time_base': 
'1/8000', 'codec_tag_string': '[7][0][0][0]', 'codec_tag': '0x0007', 
'sample_fmt': 's16', 'sample_rate': '8000', 'channels': '1', 'channel_layout': 
'unknown', 'bits_per_sample': '8', 'id': 'N/A', 'r_frame_rate': '0/0', 
'avg_frame_rate': '0/0', 'time_base': '1/8000', 'start_pts': 'N/A', 
'start_time': 'N/A', 'duration_ts': '12443128', 'duration': '1555.391000', 
'bit_rate': '64000', 'max_bit_rate': 'N/A', 'bits_per_raw_sample': 'N/A', 
'nb_frames': 'N/A', 'nb_read_frames': 'N/A', 'nb_read_packets': 'N/A', 
'DISPOSITION': {'default': '0', 'dub': '0', 'original': '0', 'comment': '0', 
'lyrics': '0', 'karaoke': '0', 'forced': '0', 'hearing_impaired': '0', 
'visual_impaired': '0', 'clean_effects': '0', 'attached_pic': '0', 
'timed_thumbnails': '0'}, 'filename': 'NICE_Dev.wav', 'nb_streams': '1', 
'nb_programs': '0', 'format_name': 'wav', 'format_long_name': 'WAV / WAVE 
(Waveform Audio)', 'size': '12443174', 'probe_score': '99'}
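
A possible workaround (a sketch, not from the original post): since the WAVE muxer has no pcm_s8 support, the mu-law stream can be re-encoded to 16-bit signed PCM with ffmpeg first, and the converted file then loaded with pydub as usual. The output name NICE_Dev_s16.wav below is invented for illustration.

# Re-encode the 8 kHz mu-law audio as 16-bit signed PCM, which the WAVE
# container does support; "NICE_Dev_s16.wav" is a hypothetical output name.
ffmpeg -i NICE_Dev.wav -acodec pcm_s16le NICE_Dev_s16.wav

After that, AudioSegment.from_file("NICE_Dev_s16.wav") should load the audio without ffmpeg ever being asked to write pcm_s8 into a WAV container.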



-
Output file #0 does not contain any stream, WebM VP9 live streaming
30 August 2019, by Salem
The source video I use is an H264 m3u8 live stream, and this is the command I tried:
ffmpeg -re -i "http://sorce.com/live.m3u8" -c:v libvpx-vp9 -s 480x360 -keyint_min 60\
-g 60 -speed 5 -tile-columns 4 -frame-parallel 1 -threads 8 -static-thresh 0 \
-max-intra-rate 300 -deadline realtime -lag-in-frames 0 -error-resilient 1 \
-b:v 300k -c:a libvorbis -b:a 64k -ar 44100 -f webm_chunk -audio_chunk_duration 2000 \
-header "/var/www/example.com/live/glass_360.hdr" -chunk_start_index 1 \
/var/www/example.com/live/glass_360_%d.chk
I picked up this command from wiki.webmproject.org.
This is the full command log:
ffmpeg version N-94564-gaac382e Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 7 (Ubuntu 7.4.0-1ubuntu1~18.04.1)
configuration: --prefix=/root/ffmpeg_build --pkg-config-flags=--static --extra-cflags=-I/root/ffmpeg_build/include --extra-ldflags=-L/root/ffmpeg_build/lib --extra-libs='-lpthread -lm' --bindir=/root/bin --enable-gpl --enable-libaom --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libfribidi --enable-nonfree
libavutil 56. 33.100 / 56. 33.100
libavcodec 58. 55.100 / 58. 55.100
libavformat 58. 30.100 / 58. 30.100
libavdevice 58. 9.100 / 58. 9.100
libavfilter 7. 58.100 / 7. 58.100
libswscale 5. 6.100 / 5. 6.100
libswresample 3. 6.100 / 3. 6.100
libpostproc 55. 6.100 / 55. 6.100
Input #0, mpegts, from 'http://sorce.com/live.m3u8':
Duration: N/A, start: 3860.014278, bitrate: N/A
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0[0x100]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(progressive), 1600x900 [SAR 1:1 DAR 16:9], 50 fps, 50 tbr, 90k tbn, 100 tbc
Stream #0:1[0x101]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp, 121 kb/s
Output #0, webm_chunk, to '/var/www/example.com/live/glass_360_%d.chk':
Output file #0 does not contain any stream
Most of the time I got this error message:
[libvorbis @ 0x5617bae0c240] encoder setup failed Error initializing
output stream 0:1 -- Error while opening encoder for output stream
#0:1 - maybe incorrect parameters such as bit_rate, rate, width or height
Here is the FFmpeg command output:
Input #0, mpegts, from 'http://sorce.com/live.m3u8':
Duration: N/A, start: 1369.000978, bitrate: N/A
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0[0x100]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(progressive), 1280x720 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
Stream #0:1[0x101]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, mono, fltp, 127 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> vp9 (libvpx-vp9))
Stream #0:1 -> #0:1 (aac (native) -> vorbis (libvorbis))
I already added stream maps before the video and audio codec arguments, but I got the same error:
-map 0:v:0 -c:v libvpx-vp9 and -map 0:a:0 -c:a libvorbis
I tried a new broadcast command; it only started working after I disabled the audio:
VP9_DASH_PARAMS="-tile-columns 4 -frame-parallel 1 -speed 6" && \
ffmpeg -y -re -i http://sorce.com/live.m3u8 -c:v libvpx-vp9 -s 480x360 -b:v 150k \
-keyint_min 150 -g 150 ${VP9_DASH_PARAMS} -an -f webm -dash 1 \
video_1280x720_500k.webm && sleep 1 && ffmpeg -f webm_dash_manifest \
-i video_1280x720_500k.webm -c copy -f webm_dash_manifest - \ adaptation_sets "id=0" manifest.mpd
(Source)
This command didn't create manifest.mpd;
it created only video_1280x720_500k.webm.
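
A hedged aside, not from the original poster: the pasted one-shot manifest command above contains a stray "- \" output and drops the leading dash from "-adaptation_sets", which is probably why no manifest.mpd appeared. A sketch of the WebM wiki's non-live form for a single video stream would look more like this (the "id=0,streams=0" value assumes only stream 0 is present):

# Sketch of the one-shot (non-live) manifest step for a single video stream.
ffmpeg -y \
  -f webm_dash_manifest -i video_1280x720_500k.webm \
  -c copy \
  -f webm_dash_manifest \
  -adaptation_sets "id=0,streams=0" \
  manifest.mpd

The live workflow in the edit below sidesteps this entirely by generating the manifest from the .hdr files with -live 1.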
==Edit==
I wrote a new command (exactly as the WebM wiki says) and everything works perfectly now:
VP9_LIVE_PARAMS=" -tile-columns 4 -frame-parallel 1 -threads 4 -static-thresh 0 -max-intra-rate 300 -deadline good -speed 5 -slices 0 -row-mt 1 -lag-in-frames 0 -error-resilient 1" ; \
ffmpeg -loglevel 0 -re -i http://sorce.com/live.m3u8 \
-map 0:v -pix_fmt yuv420p -c:v libvpx-vp9 -s 480x360 -r 23.976 -keyint_min 60 -g 60 ${VP9_LIVE_PARAMS} -b:v 300k \
-f webm_chunk -header "/var/www/example.com/live/glass_360.hdr" -chunk_start_index 1 \
/var/www/example.com/live/glass_360_%d.chk \
-map 0:a -c:a libvorbis -b:a 64k \
-f webm_chunk -audio_chunk_duration 2000 -header /var/www/example.com/live/glass_171.hdr \
-chunk_start_index 1 /var/www/example.com/live/glass_171_%d.chk
This is the command to generate the .mpd file:
ffmpeg -y -f webm_dash_manifest -live 1 \
-i /var/www/example.com/live/glass_360.hdr \
-f webm_dash_manifest -live 1 \
-i /var/www/example.com/live/glass_171.hdr \
-c copy -map 0 -map 1 \
-f webm_dash_manifest -live 1 \
-adaptation_sets "id=0,streams=0 id=1,streams=1" \
-chunk_start_index 1 -chunk_duration_ms 2000 \
-time_shift_buffer_depth 7200 -minimum_update_period 7200 \
/var/www/example.com/live/glass_live_manifest.mpd
And this is the webtest.html file:
<script src='http://stackoverflow.com/feeds/tag/dash.all.min.js'></script>
-
Bash script doesn't read entire line
4 October 2019, by Miguel Alatorre
First off, I am in the early stages of learning bash shell scripting, so I apologize if I say or do anything that doesn't make sense.
I'm trying to have an SBC (a Khadas VIM3, specifically) run a Python script to find and label faces in any given video from a local server. To do that, I need to reduce the frame rate and resolution of the video, which is where the bash script comes into play. I need to automate this process and thought I'd do it with a bash script and crontab.
The file paths are found and written to a file by a separate script, and are read by the bash script. The problem comes when I try to call ffmpeg on those file paths.
The code:
pathFile="/home/khadas/Documents/paths"
while IFS= read -r line
do
ffmpeg -i "$line" -vf scale=960:540 -y "$line"
cp "$line" ./
done < $pathFile
The resulting error:
: No such file or directoryalRecognition/10/14-53.h264+/20-509-26-10-14-53.mp4
cp: cannot stat '/home/khadas/Downloads/FacialRecognition/10/14-53.h264+/20-509-26-10-14-53.mp4'$'\r': No such file or directory
Example of the paths file (there will be hundreds of entries):
/home/khadas/Downloads/FacialRecognition/10/14-42.h264+/20-509-26-10-14-42.mp4
/home/khadas/Downloads/FacialRecognition/10/59-06.h264+/20-509-26-10-59-06.mp4
/home/khadas/Downloads/FacialRecognition/10/36-28.h264+/20-509-26-10-36-28.mp4
/home/khadas/Downloads/FacialRecognition/10/14-53.h264+/20-509-26-10-14-53.mp4
When using a trimmed-down version, the script works as expected. Could it be an issue with the length of the lines? Any help is much appreciated.
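
One likely cause (a guess based on the '\r' visible in the cp error, not something stated in the question): the paths file appears to have Windows-style CRLF line endings, so every line the loop reads ends in a carriage return that is not part of the real path. A minimal sketch of the loop with that carriage return stripped; the _540p output name is invented here, since ffmpeg should not write its output over the file it is still reading:

#!/usr/bin/env bash
# Sketch: strip a trailing carriage return from each path before using it.
pathFile="/home/khadas/Documents/paths"
while IFS= read -r line
do
    line="${line%$'\r'}"            # drop the \r left over from CRLF line endings
    out="${line%.mp4}_540p.mp4"     # hypothetical output name; avoids overwriting the input
    ffmpeg -i "$line" -vf scale=960:540 -y "$out"
    cp "$out" ./
done < "$pathFile"

Running the paths file through dos2unix (or tr -d '\r') once before the loop would achieve the same thing.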