
Other articles (35)
-
No talk of markets, clouds, etc.
10 April 2011
The vocabulary used on this site tries to avoid any reference to the fads that flourish merrily on web 2.0 and in the companies that live off them.
You are therefore invited to banish the use of the terms "Brand", "Cloud", "Market", etc.
Our motivation is above all to create a simple tool, accessible to everyone, that encourages the sharing of creations on the Internet and allows authors to keep as much autonomy as possible.
No "Gold or Premium contract" is therefore planned, no (...) -
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out. -
The plugin: Podcasts.
14 July 2010, by
The problem of podcasting is once again a problem that highlights the standardisation of data transport on the Internet.
Two interesting formats exist: the one developed by Apple, heavily geared towards the use of iTunes, whose SPEC is here; and the "Media RSS Module" format, which is more "free", notably supported by Yahoo and the Miro software.
File types supported in feeds
Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)
On other sites (4805)
-
FFMPEG in an AWS Lambda will only output 5 seconds of converted video [duplicate]
5 June 2021, by beerandsmiles
I've been looking for a solution to this issue, but I can't seem to find what's going wrong.


In short, I'm using an AWS Lambda to convert video captured from a Raspberry Pi in raw .h264 format to .mp4. The problem is that the output file is always only 5 seconds long.


So I input a video of, say, 500 MB that is 10 minutes long, and the output is an mp4 that is exactly the first 5 seconds of the source video.


The Lambda has been set up following the tutorial from Amazon that is shown here:
https://aws.amazon.com/blogs/media/processing-user-generated-content-using-aws-lambda-and-ffmpeg/


It is triggered by an upload to one S3 bucket, transcodes the video, and puts it in a different bucket. The purpose is to store a high-quality copy of the video that is smaller, to save costs (this is a personal project, so I'm paying personally).
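For reference, the trigger is wired roughly like this; a minimal boto3 sketch where the bucket name and function ARN are placeholders, not the real ones (the Lambda also needs a resource policy allowing S3 to invoke it, which the console adds for you):

import boto3

s3 = boto3.client("s3")
# Placeholder bucket name and Lambda ARN; fire the function on every new object.
s3.put_bucket_notification_configuration(
    Bucket="raw-h264-uploads",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:transcode-h264",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)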


I've put the full code of the lambda down below.
I had trouble using their recommended stdout method, as that resulted in a file being created with a size of 0 bytes.
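The tutorial's stdout approach looked roughly like this; a sketch rather than my exact code, reusing the s3_client, signed URL and destination names from the handler shown further down. The fragmented-MP4 movflags are my addition: the default MP4 muxer refuses a non-seekable output, which is one plausible reason the piped version gave a 0-byte file.

import shlex
import subprocess

# ffmpeg reads the presigned URL and writes fragmented MP4 to stdout,
# which is then uploaded straight to S3 without touching /tmp.
ffmpeg_cmd = (
    f'/opt/bin/ffmpeg -framerate 25 -i "{s3_source_signed_url}" '
    '-f mp4 -movflags frag_keyframe+empty_moov pipe:1'
)
p = subprocess.run(shlex.split(ffmpeg_cmd), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
s3_client.put_object(Body=p.stdout, Bucket=S3_DESTINATION_BUCKET,
                     Key=s3_destination_filename)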


You'll see a few commented lines where I tried different things to solve it. I thought it best to leave them in while asking the question so you can see what I've done. First, the method of piping stdout directly into the output S3 bucket did not work, so I stored the output file in the Lambda's /tmp directory.


However, when I first did this using the signed link as the input, it gave me 5 seconds of the input video.


Thinking this had to do with an issue in the stream that FFMPEG was getting, I tried instead to download the file from the first S3 bucket into the temp folder, then convert it, and then upload it.


The actual FFMPEG command is quite simple:


f"/opt/bin/ffmpeg -framerate 25 -i {s3_source_key} output.mp4"


But this outputs a 5 second video.
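To make failures easier to see, here is a sketch of the subprocess variant (similar to the commented-out lines in the full code below), which prints ffmpeg's exit code and stderr to CloudWatch. The -y and -nostdin flags are my additions, not part of the original command: -y overwrites a leftover output.mp4 from a previous warm invocation, and -nostdin stops ffmpeg from reading interactive commands from stdin (the "Enter command" / "Parse error" lines in the log further down suggest it may be doing that).

import shlex
import subprocess

cmd = f"/opt/bin/ffmpeg -nostdin -y -framerate 25 -i {s3_source_key} output.mp4"
proc = subprocess.run(shlex.split(cmd), capture_output=True, text=True)
print("ffmpeg exit code:", proc.returncode)
print(proc.stderr[-4000:])  # the end of the log is where ffmpeg reports errors
if proc.returncode != 0:
    raise RuntimeError("ffmpeg conversion failed")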


I have also tried using different versions of FFMPEG for the Lambda layer, with no luck. Also, I have set an execution timeout of 2 minutes with 2 GB of RAM for this Lambda.
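Those limits were set in the console, which amounts to something like the following (a sketch; the function name is a placeholder):

import boto3

lambda_client = boto3.client("lambda")
# Placeholder function name; 120 s timeout and 2048 MB of memory, as described above.
lambda_client.update_function_configuration(
    FunctionName="transcode-h264",
    Timeout=120,
    MemorySize=2048,
)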


The last thing is that running this command directly on a Linux machine, such as a Raspberry Pi, results in an mp4 of the correct length; only in the Lambda am I having this problem.


I'm completely lost, and I can't seem to find any documentation on this happening to anyone else.


import os
import subprocess
import shlex
import json
import boto3
from time import sleep

S3_DESTINATION_BUCKET = "dashcam-duncan"
SIGNED_URL_TIMEOUT = 600

def lambda_handler(event, context):
    print(event)
    os.chdir('/tmp')
    # Source bucket and key from the S3 "object created" event record
    s3_source_bucket = event['Records'][0]['s3']['bucket']['name']
    s3_source_key = event['Records'][0]['s3']['object']['key']

    s3_source_basename = os.path.splitext(os.path.basename(s3_source_key))[0]
    s3_destination_filename = s3_source_basename + ".mp4"

    s3_client = boto3.client('s3')
    s3_source_signed_url = s3_client.generate_presigned_url('get_object',
        Params={'Bucket': s3_source_bucket, 'Key': s3_source_key},
        ExpiresIn=SIGNED_URL_TIMEOUT)
    print(s3_source_signed_url)
    # Download the raw .h264 file into /tmp instead of streaming the signed URL
    s3_client.download_file(s3_source_bucket, s3_source_key, s3_source_key)
    # ffmpeg_cmd = "/opt/bin/ffmpeg -framerate 25 -i \"" + s3_source_signed_url + "\" output.mp4 "
    ffmpeg_cmd = f"/opt/bin/ffmpeg -framerate 25 -i {s3_source_key} output.mp4 "
    # command1 = shlex.split(ffmpeg_cmd)
    # print(command1)
    os.system(ffmpeg_cmd)
    # os.system('ls')
    # p1 = subprocess.run(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    file = 'output.mp4'
    # Upload the converted file to the destination bucket
    resp = s3_client.put_object(Body=open(file, "rb"), Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)
    # resp = s3_client.put_object(Body=p1.stdout, Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)
    s3 = boto3.resource('s3')
    # Delete the original raw file from the source bucket
    s3.Object(s3_source_bucket, s3_source_key).delete()
    return {
        'statusCode': 200,
        'body': json.dumps('Processing complete successfully')
    }



The CloudWatch logs from the last execution of this:


built with gcc 8 (Debian 8.3.0-6)
configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-libgme --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libdav1d --enable-libxvid --enable-libzvbi --enable-libzimg
libavutil 56. 70.100 / 56. 70.100
libavcodec 58.134.100 / 58.134.100
libavformat 58. 76.100 / 58. 76.100
libavdevice 58. 13.100 / 58. 13.100
libavfilter 7.110.100 / 7.110.100
libswscale 5. 9.100 / 5. 9.100
libswresample 3. 9.100 / 3. 9.100
libpostproc 55. 9.100 / 55. 9.100
Input #0, h264, from 'video00087.h264':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: h264 (High), yuv420p(progressive), 1280x720, 25 fps, 25 tbr, 1200k tbn, 50 tbc
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 @ 0x6aaf500] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x6aaf500] profile High, level 3.1, 4:2:0, 8-bit
[libx264 @ 0x6aaf500] 264 - core 161 r3048 b86ae3c - H.264/MPEG-4 AVC codec - Copyleft 2003-2021 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=3 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'output.mp4':
Metadata:
encoder : Lavf58.76.100
Stream #0:0: Video: h264 (avc1 / 0x31637661), yuv420p(progressive), 1280x720, q=2-31, 25 fps, 12800 tbn
Metadata:
encoder : Lavc58.134.100 libx264
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
frame= 1 fps=0.0 q=0.0 size= 0kB time=00:00:00.00 bitrate=N/A speed= 0x 
frame= 47 fps=0.0 q=0.0 size= 0kB time=00:00:00.00 bitrate=N/A speed= 0x 
frame= 56 fps= 44 q=28.0 size= 0kB time=00:00:00.24 bitrate= 1.6kbits/s speed=0.187x 
frame= 65 fps= 35 q=28.0 size= 0kB time=00:00:00.60 bitrate= 0.6kbits/s speed=0.325x 
frame= 74 fps= 31 q=28.0 size= 0kB time=00:00:00.96 bitrate= 0.4kbits/s speed=0.399x 
Enter command: <target>|all <time>|-1 <command>[ <argument>]
Parse error, at least 3 arguments were expected, only 1 given in string 'V����Ҿ�#I���bv��oF��LxE��{��y5Jx�X�-f?2k�E~ہ��L��Y?�w���9?S�?�(q?��y��V8�=)�9'�?�-j?��?�3���Ŧ$��r���\��r}?zb?E��?��B}b4��2��[z�&�逋�Qk�ar�=y���'
frame= 82 fps= 28 q=28.0 size= 256kB time=00:00:01.28 bitrate=1638.6kbits/s speed=0.434x 
frame= 90 fps= 25 q=28.0 size= 256kB time=00:00:01.60 bitrate=1310.9kbits/s speed=0.442x 
frame= 98 fps= 23 q=28.0 size= 256kB time=00:00:01.92 bitrate=1092.4kbits/s speed=0.458x 
frame= 107 fps= 23 q=28.0 size= 256kB time=00:00:02.28 bitrate= 919.9kbits/s speed=0.48x 
frame= 115 fps= 22 q=28.0 size= 512kB time=00:00:02.60 bitrate=1613.3kbits/s speed=0.495x 
frame= 122 fps= 21 q=28.0 size= 512kB time=00:00:02.88 bitrate=1456.4kbits/s speed=0.499x 
[h264 @ 0x6b68c80] left block unavailable for requested intra mode
[h264 @ 0x6b68c80] error while decoding MB 0 19, bytestream 37403
[h264 @ 0x6b68c80] concealing 2129 DC, 2129 AC, 2129 MV errors in P frame
video00087.h264: corrupt decoded frame in stream 0
[h264 @ 0x6ab4080] left block unavailable for requested intra4x4 mode -1
[h264 @ 0x6ab4080] error while decoding MB 0 37, bytestream 13222
[h264 @ 0x6ab4080] concealing 689 DC, 689 AC, 689 MV errors in P frame
video00087.h264: corrupt decoded frame in stream 0
[h264 @ 0x6b68c80] concealing 1347 DC, 1347 AC, 1347 MV errors in P frame
frame= 130 fps= 21 q=28.0 size= 512kB time=00:00:03.20 bitrate=1310.8kbits/s speed=0.509x 
video00087.h264: corrupt decoded frame in stream 0
frame= 131 fps= 15 q=-1.0 Lsize= 1081kB time=00:00:05.12 bitrate=1729.6kbits/s speed=0.575x 
video:1079kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.220914%
[libx264 @ 0x6aaf500] frame I:1 Avg QP:21.61 size: 37761
[libx264 @ 0x6aaf500] frame P:34 Avg QP:22.25 size: 18066
[libx264 @ 0x6aaf500] frame B:96 Avg QP:24.46 size: 4706
[libx264 @ 0x6aaf500] consecutive B-frames: 2.3% 0.0% 0.0% 97.7%
[libx264 @ 0x6aaf500] mb I I16..4: 15.2% 61.2% 23.6%
[libx264 @ 0x6aaf500] mb P I16..4: 8.4% 15.6% 1.2% P16..4: 39.2% 13.7% 6.9% 0.0% 0.0% skip:15.0%
[libx264 @ 0x6aaf500] mb B I16..4: 0.7% 1.8% 0.0% B16..8: 44.5% 4.5% 0.5% direct: 3.6% skip:44.4% L0:46.9% L1:48.0% BI: 5.1%
[libx264 @ 0x6aaf500] 8x8 transform intra:63.5% inter:83.1%
[libx264 @ 0x6aaf500] coded y,uvDC,uvAC intra: 22.1% 25.4% 2.8% inter: 11.6% 19.3% 1.2%
[libx264 @ 0x6aaf500] i16 v,h,dc,p: 4% 63% 8% 25%
[libx264 @ 0x6aaf500] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 9% 26% 53% 1% 2% 1% 3% 1% 3%
[libx264 @ 0x6aaf500] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 16% 44% 16% 4% 4% 3% 5% 4% 4%
[libx264 @ 0x6aaf500] i8c dc,h,v,p: 66% 24% 9% 1%
[libx264 @ 0x6aaf500] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x6aaf500] ref P L0: 57.5% 16.8% 18.2% 7.5%
[libx264 @ 0x6aaf500] ref B L0: 89.8% 8.0% 2.2%
[libx264 @ 0x6aaf500] ref B L1: 96.0% 4.0%
[libx264 @ 0x6aaf500] kb/s:1685.21
END RequestId: 96e1031a-b1a2-4480-a59d-68de487671bd
REPORT RequestId: 96e1031a-b1a2-4480-a59d-68de487671bd Duration: 11721.77 ms Billed Duration: 11722 ms Memory Size: 2048 MB Max Memory Used: 494 MB Init Duration: 353.14 ms


I've been struggling with this for a couple days now, any help would be amazing.


-
WebRTC books – a brief review
30 December 2013, by silvia
I just finished reading Rob Manson’s awesome book “Getting Started with WebRTC” and I can highly recommend it for any Web developer who is interested in WebRTC.
Rob explains very clearly how to create your first video, audio or data peer-connection using WebRTC in current Google Chrome or Firefox (I think it also now applies to Opera, though that wasn’t the case when his book was published). He makes example code available, so you can replicate it in your own Web application easily, including the setup of a signalling server. He also points out that you need an ICE (STUN/TURN) server to punch through firewalls and gives recommendations for what software is available, but stops short of explaining how to set them up.
Rob’s focus is very much on the features required in a typical Web application:
- video calls
- audio calls
- text chats
- file sharing
In fact, he provides the most in-depth demo of how to set up a good file sharing interface I have come across.
Rob then also extends his introduction to WebRTC to two key application areas: education and team communication. His recommendations are spot on and required reading for anyone developing applications in these spaces.
—
Before Rob’s book, I had also read Alan Johnson and Dan Burnett’s “WebRTC” book on the APIs and RTCWEB protocols of the HTML5 Real-Time Web.
Alan and Dan’s book was written more than a year ago and explains the state of standardisation at that time. It’s probably a little outdated now, but it still gives you good foundations on why some decisions were made the way they were and what the contentious issues are (some of which still remain). If you really want to understand what happens behind the scenes when you call certain functions in the WebRTC APIs of browsers, then this is for you.
Alan and Dan’s book explains in more detail than Rob’s book how IP addresses of communication partners are found, how firewall holepunching works, how sessions get negotiated, and how the standards process works. It’s probably less useful to a Web developer who just wants to implement video call functionality into their Web application, though if something goes wrong you may find yourself digging into the details of SDP, SRTP, DTLS, and other cryptic abbreviations of protocols that all need to work together to get a WebRTC call working.
—
Overall, both books are worthwhile and cover different aspects of WebRTC that you will stumble across if you are directly dealing with WebRTC code.
-
Carrierwave video not being processed before uploading to S3
12 December 2013, by Cramps
I'm using Carrierwave, Carrierwave-video and Carrierwave-video-thumbnailer to process videos and make a thumbnail when they're uploaded. This was all working nicely while I was saving the files on my file system. However, now that I've added uploading to Amazon S3 using the carrierwave-aws gem, the videos are being uploaded to S3 without being processed first. It's as if the process encode_video and version :thumb steps are being skipped by the uploader.
Here's what was working for me at first (before adding S3):
class VideoUploader < CarrierWave::Uploader::Base
  include CarrierWave::Video
  include CarrierWave::Video::Thumbnailer

  storage :file

  def store_dir
    "upload/path/"
  end

  process encode_video: [{ bunch of video options }]

  version :thumb do
    process thumbnail: [{ bunch of thumbnailer options }]

    def full_filename for_file
      png_name for_file, version_name
    end
  end

  def png_name for_file, version_name
    %Q{#{version_name}_#{for_file.chomp(File.extname(for_file))}.png}
  end
end

Now it's really just the same, except it's using storage :aws instead.