
Other articles (93)
-
General document management
13 May 2011, by
MediaSPIP never modifies the original document that is uploaded.
For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while leaving the original available for download in case it cannot be read in a web browser; and retrieving the original document's metadata to describe the file textually.
The tables below explain what MediaSPIP can do (...)
-
Use, discuss, criticize
13 April 2011, by
Talk to people directly involved in MediaSPIP's development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP's potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
-
Images
15 May 2013
On other sites (13403)
-
FFMPEG in an AWS Lambda will only output 5 seconds of converted video [duplicate]
5 June 2021, by beerandsmiles
I've been looking for a solution to this issue, but I can't figure out what's going wrong.


In short, I'm using an AWS Lambda function to convert video captured from a Raspberry Pi in raw .h264 format to .mp4. The problem is that the output file is always only 5 seconds long.


So if I input a video of, say, 500 MB that is 10 minutes long, the output is an mp4 containing exactly the first 5 seconds of the source video.


The Lambda has been set up following the tutorial from Amazon shown here:
https://aws.amazon.com/blogs/media/processing-user-generated-content-using-aws-lambda-and-ffmpeg/


It is triggered by an upload to one S3 bucket, transcodes the video, and puts the result in a different bucket. The purpose is to store a high-quality copy of the video that is smaller, to save costs (this is a personal project, so I'm paying for it personally).


I've put the full code of the Lambda down below.
I had trouble using their recommended stdout method, as that resulted in a file of 0 bytes being created.


You'll see a few commented-out lines where I tried different things to solve it. I thought it best to leave them in while asking the question so you can see what I've done. First, the method of piping stdout directly into the output S3 bucket did not work, so I stored the output file in the Lambda's /tmp directory.
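(For context, an MP4 written to a pipe has to use fragmented output flags, because the MP4 muxer cannot seek backwards on a pipe; that alone can produce an empty result. The sketch below is my own assumption of how that piped variant would look, mirroring the commented-out subprocess lines in the code further down; the function and parameter names are placeholders, and the -movflags/-f options are additions that are not in the original command.)

# Sketch of the stdout-piping variant (an assumption, not the tutorial's exact code).
# The MP4 muxer cannot seek on a pipe, so fragmented output flags are required;
# note this also buffers the whole result in memory before uploading.
import subprocess
import boto3

def transcode_via_pipe(source_url, dest_bucket, dest_key):
    cmd = [
        "/opt/bin/ffmpeg", "-framerate", "25", "-i", source_url,
        "-movflags", "frag_keyframe+empty_moov",  # allow MP4 on a non-seekable pipe
        "-f", "mp4", "pipe:1",
    ]
    proc = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    boto3.client("s3").put_object(Body=proc.stdout, Bucket=dest_bucket, Key=dest_key)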


However, when I first did this using the signed URL as the input, it gave me 5 seconds of the input video.


Thinking this had to do with an issue in the stream that FFMPEG was getting, I tried instead to download the file from the first S3 bucket into the temp folder, then convert it, and then upload it.


The actual FFmpeg command is quite simple:


f"/opt/bin/ffmpeg -framerate 25 -i {s3_source_key} output.mp4"


But this still outputs a 5-second video.
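(A diagnostic that might narrow this down, assuming ffprobe is bundled in the same layer as ffmpeg, which is the case for the common static builds but is an assumption here: inside the handler, right after download_file(), check that the whole object actually reached /tmp and how many frames are readable from the raw .h264 file.)

# Hypothetical check inside the handler, right after download_file():
# the current directory is /tmp, so s3_source_key is the downloaded file.
import os
import subprocess

print("downloaded bytes:", os.path.getsize(s3_source_key))
probe = subprocess.run(
    ["/opt/bin/ffprobe", "-v", "error", "-count_frames",
     "-show_entries", "stream=nb_read_frames", "-of", "csv=p=0", s3_source_key],
    capture_output=True, text=True)
print("readable frames:", probe.stdout.strip())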


I have also tried using different versions of FFmpeg for the Lambda layer, with no luck. Also, I have set an execution timeout of 2 minutes with 2 GB of RAM for this Lambda.


The last thing is that running this command directly on a Linux machine, such as a Raspberry Pi, results in an mp4 of the correct length; only in the Lambda am I having this problem.


I'm completely lost, and I can't seem to find any documentation on this happening to anyone else.


import os
import subprocess
import shlex
import json  # needed for json.dumps in the return value
import boto3
from time import sleep

S3_DESTINATION_BUCKET = "dashcam-duncan"
SIGNED_URL_TIMEOUT = 600

def lambda_handler(event, context):
    print(event)
    os.chdir('/tmp')
    s3_source_bucket = event['Records'][0]['s3']['bucket']['name']
    s3_source_key = event['Records'][0]['s3']['object']['key']

    s3_source_basename = os.path.splitext(os.path.basename(s3_source_key))[0]
    s3_destination_filename = s3_source_basename + ".mp4"

    s3_client = boto3.client('s3')
    s3_source_signed_url = s3_client.generate_presigned_url(
        'get_object',
        Params={'Bucket': s3_source_bucket, 'Key': s3_source_key},
        ExpiresIn=SIGNED_URL_TIMEOUT)
    print(s3_source_signed_url)
    # Download the source object into /tmp (the current directory).
    s3_client.download_file(s3_source_bucket, s3_source_key, s3_source_key)
    # ffmpeg_cmd = "/opt/bin/ffmpeg -framerate 25 -i \"" + s3_source_signed_url + "\" output.mp4 "
    ffmpeg_cmd = f"/opt/bin/ffmpeg -framerate 25 -i {s3_source_key} output.mp4 "
    # command1 = shlex.split(ffmpeg_cmd)
    # print(command1)
    os.system(ffmpeg_cmd)
    # os.system('ls')
    # p1 = subprocess.run(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    file = 'output.mp4'
    # Upload the converted file from /tmp to the destination bucket.
    resp = s3_client.put_object(Body=open(file, "rb"), Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)
    # resp = s3_client.put_object(Body=p1.stdout, Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)
    s3 = boto3.resource('s3')
    s3.Object(s3_source_bucket, s3_source_key).delete()
    return {
        'statusCode': 200,
        'body': json.dumps('Processing complete successfully')
    }
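(One detail worth flagging in the code above: os.system ignores ffmpeg's exit status, so a run that fails or is cut off partway would still upload whatever partial output.mp4 exists. A sketch of the same call through subprocess with the return code checked, not a confirmed fix for the 5-second issue:)

# Sketch: run ffmpeg so a non-zero exit status stops the upload instead of
# silently shipping a truncated output.mp4 to the destination bucket.
proc = subprocess.run(
    ["/opt/bin/ffmpeg", "-framerate", "25", "-i", s3_source_key, "output.mp4"],
    stderr=subprocess.PIPE, text=True)
print("ffmpeg returncode:", proc.returncode)
if proc.returncode != 0:
    raise RuntimeError("ffmpeg failed:\n" + proc.stderr[-2000:])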



The CloudWatch logs from the last execution of this:


built with gcc 8 (Debian 8.3.0-6)
configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-libgme --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libdav1d --enable-libxvid --enable-libzvbi --enable-libzimg
libavutil 56. 70.100 / 56. 70.100
libavcodec 58.134.100 / 58.134.100
libavformat 58. 76.100 / 58. 76.100
libavdevice 58. 13.100 / 58. 13.100
libavfilter 7.110.100 / 7.110.100
libswscale 5. 9.100 / 5. 9.100
libswresample 3. 9.100 / 3. 9.100
libpostproc 55. 9.100 / 55. 9.100
Input #0, h264, from 'video00087.h264':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: h264 (High), yuv420p(progressive), 1280x720, 25 fps, 25 tbr, 1200k tbn, 50 tbc
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 @ 0x6aaf500] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x6aaf500] profile High, level 3.1, 4:2:0, 8-bit
[libx264 @ 0x6aaf500] 264 - core 161 r3048 b86ae3c - H.264/MPEG-4 AVC codec - Copyleft 2003-2021 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=3 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'output.mp4':
Metadata:
encoder : Lavf58.76.100
Stream #0:0: Video: h264 (avc1 / 0x31637661), yuv420p(progressive), 1280x720, q=2-31, 25 fps, 12800 tbn
Metadata:
encoder : Lavc58.134.100 libx264
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
frame= 1 fps=0.0 q=0.0 size= 0kB time=00:00:00.00 bitrate=N/A speed= 0x 
frame= 47 fps=0.0 q=0.0 size= 0kB time=00:00:00.00 bitrate=N/A speed= 0x 
frame= 56 fps= 44 q=28.0 size= 0kB time=00:00:00.24 bitrate= 1.6kbits/s speed=0.187x 
frame= 65 fps= 35 q=28.0 size= 0kB time=00:00:00.60 bitrate= 0.6kbits/s speed=0.325x 
frame= 74 fps= 31 q=28.0 size= 0kB time=00:00:00.96 bitrate= 0.4kbits/s speed=0.399x 
Enter command: <target>|all <time>|-1 <command>[ <argument>]
Parse error, at least 3 arguments were expected, only 1 given in string 'V����Ҿ�#I���bv��oF��LxE��{��y5Jx�X�-f?2k�E~ہ��L��Y?�w���9?S�?�(q?��y��V8�=)�9'�?�-j?��?�3���Ŧ$��r���\��r}?zb?E��?��B}b4��2��[z�&�逋�Qk�ar�=y���'
frame= 82 fps= 28 q=28.0 size= 256kB time=00:00:01.28 bitrate=1638.6kbits/s speed=0.434x 
frame= 90 fps= 25 q=28.0 size= 256kB time=00:00:01.60 bitrate=1310.9kbits/s speed=0.442x 
frame= 98 fps= 23 q=28.0 size= 256kB time=00:00:01.92 bitrate=1092.4kbits/s speed=0.458x 
frame= 107 fps= 23 q=28.0 size= 256kB time=00:00:02.28 bitrate= 919.9kbits/s speed=0.48x 
frame= 115 fps= 22 q=28.0 size= 512kB time=00:00:02.60 bitrate=1613.3kbits/s speed=0.495x 
frame= 122 fps= 21 q=28.0 size= 512kB time=00:00:02.88 bitrate=1456.4kbits/s speed=0.499x 
[h264 @ 0x6b68c80] left block unavailable for requested intra mode
[h264 @ 0x6b68c80] error while decoding MB 0 19, bytestream 37403
[h264 @ 0x6b68c80] concealing 2129 DC, 2129 AC, 2129 MV errors in P frame
video00087.h264: corrupt decoded frame in stream 0
[h264 @ 0x6ab4080] left block unavailable for requested intra4x4 mode -1
[h264 @ 0x6ab4080] error while decoding MB 0 37, bytestream 13222
[h264 @ 0x6ab4080] concealing 689 DC, 689 AC, 689 MV errors in P frame
video00087.h264: corrupt decoded frame in stream 0
[h264 @ 0x6b68c80] concealing 1347 DC, 1347 AC, 1347 MV errors in P frame
frame= 130 fps= 21 q=28.0 size= 512kB time=00:00:03.20 bitrate=1310.8kbits/s speed=0.509x 
video00087.h264: corrupt decoded frame in stream 0
frame= 131 fps= 15 q=-1.0 Lsize= 1081kB time=00:00:05.12 bitrate=1729.6kbits/s speed=0.575x 
video:1079kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.220914%
[libx264 @ 0x6aaf500] frame I:1 Avg QP:21.61 size: 37761
[libx264 @ 0x6aaf500] frame P:34 Avg QP:22.25 size: 18066
[libx264 @ 0x6aaf500] frame B:96 Avg QP:24.46 size: 4706
[libx264 @ 0x6aaf500] consecutive B-frames: 2.3% 0.0% 0.0% 97.7%
[libx264 @ 0x6aaf500] mb I I16..4: 15.2% 61.2% 23.6%
[libx264 @ 0x6aaf500] mb P I16..4: 8.4% 15.6% 1.2% P16..4: 39.2% 13.7% 6.9% 0.0% 0.0% skip:15.0%
[libx264 @ 0x6aaf500] mb B I16..4: 0.7% 1.8% 0.0% B16..8: 44.5% 4.5% 0.5% direct: 3.6% skip:44.4% L0:46.9% L1:48.0% BI: 5.1%
[libx264 @ 0x6aaf500] 8x8 transform intra:63.5% inter:83.1%
[libx264 @ 0x6aaf500] coded y,uvDC,uvAC intra: 22.1% 25.4% 2.8% inter: 11.6% 19.3% 1.2%
[libx264 @ 0x6aaf500] i16 v,h,dc,p: 4% 63% 8% 25%
[libx264 @ 0x6aaf500] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 9% 26% 53% 1% 2% 1% 3% 1% 3%
[libx264 @ 0x6aaf500] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 16% 44% 16% 4% 4% 3% 5% 4% 4%
[libx264 @ 0x6aaf500] i8c dc,h,v,p: 66% 24% 9% 1%
[libx264 @ 0x6aaf500] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x6aaf500] ref P L0: 57.5% 16.8% 18.2% 7.5%
[libx264 @ 0x6aaf500] ref B L0: 89.8% 8.0% 2.2%
[libx264 @ 0x6aaf500] ref B L1: 96.0% 4.0%
[libx264 @ 0x6aaf500] kb/s:1685.21
END RequestId: 96e1031a-b1a2-4480-a59d-68de487671bd
REPORT RequestId: 96e1031a-b1a2-4480-a59d-68de487671bd Duration: 11721.77 ms Billed Duration: 11722 ms Memory Size: 2048 MB Max Memory Used: 494 MB Init Duration: 353.14 ms


I've been struggling with this for a couple of days now; any help would be amazing.


-
What is the best way to merge .mkv and .mka files using ffmpeg?
28 June 2017, by Robert
I'm using ffmpeg to merge .mkv and .mka files into .mp4 files. My current command looks like this:
ffmpeg -i video.mkv -i audio.mka output_path.mp4
The audio and video files are pre-signed URLs from Amazon S3. Even on a server with sufficient resources, this process goes very slowly. I've researched situations where you can tell ffmpeg to skip re-encoding each frame, but I think that in my situation it actually does need to re-encode each frame.
I've downloaded two sample files to my MacBook Pro and installed ffmpeg locally via Homebrew. When I run the command
ffmpeg -i video.mkv -i audio.mka -c copy output.mp4
I get the following output :
ffmpeg version 3.3.2 Copyright (c) 2000-2017 the FFmpeg developers
built with Apple LLVM version 8.1.0 (clang-802.0.42)
configuration: --prefix=/usr/local/Cellar/ffmpeg/3.3.2 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libmp3lame --enable-libx264 --enable-libxvid --enable-opencl --disable-lzma --enable-vda
libavutil 55. 58.100 / 55. 58.100
libavcodec 57. 89.100 / 57. 89.100
libavformat 57. 71.100 / 57. 71.100
libavdevice 57. 6.100 / 57. 6.100
libavfilter 6. 82.100 / 6. 82.100
libavresample 3. 5. 0 / 3. 5. 0
libswscale 4. 6.100 / 4. 6.100
libswresample 2. 7.100 / 2. 7.100
libpostproc 54. 5.100 / 54. 5.100
Input #0, matroska,webm, from '319_audio_1498590673766.mka':
Metadata:
encoder : GStreamer matroskamux version 1.8.1.1
creation_time : 2017-06-27T19:10:58.000000Z
Duration: 00:00:03.53, start: 2.831000, bitrate: 50 kb/s
Stream #0:0(eng): Audio: opus, 48000 Hz, stereo, fltp (default)
Metadata:
title : Audio
Input #1, matroska,webm, from '319_video_1498590673766.mkv':
Metadata:
encoder : GStreamer matroskamux version 1.8.1.1
creation_time : 2017-06-27T19:10:58.000000Z
Duration: 00:00:03.97, start: 2.851000, bitrate: 224 kb/s
Stream #1:0(eng): Video: vp8, yuv420p(progressive), 640x480, SAR 1:1 DAR 4:3, 30 tbr, 1k tbn, 1k tbc (default)
Metadata:
title : Video
[mp4 @ 0x7fa4f0806800] Could not find tag for codec vp8 in stream #0, codec not currently supported in container
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Stream mapping:
Stream #1:0 -> #0:0 (copy)
Stream #0:0 -> #0:1 (copy)
Last message repeated 1 times

So it appears that the specific encodings I'm working with are VP8 video and Opus audio, which I believe are incompatible with the .mp4 output container. I would appreciate answers that cover ways of optimally merging VP8 and Opus into .mp4 output, or answers that point me in the direction of output media formats that are both compatible with VP8 & Opus and playable on web and mobile devices, so that I can bypass the re-encoding step altogether.
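(For reference, WebM is the container VP8 and Opus normally live in, so a remux without re-encoding would look something like the line below; whether it plays acceptably on the target web and mobile clients would still need to be verified.)
ffmpeg -i video.mkv -i audio.mka -c copy output.webm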
EDIT:
Just wanted to provide a benchmark after following LordNeckbeard's advice:
4 min 41 second video transcoded locally on my mac
LordNeckbeard's approach: 15 mins 55 seconds (955 seconds)
Current approach: 18 mins 49 seconds (1129 seconds)
18% speed increase
-
AWS EC2 and OpenCV with UDP consumer
22 July 2021, by NoobZik
I have two EC2 instances:


- A, which grabs the video feed from Amazon Kinesis Video and sends it via UDP to B
- B, the consumer, which grabs the UDP feed from A






While debugging my OpenCV code not grabbing the UDP feed, I suspect something is wrong with ffmpeg, so I am going to make sure that ffplay can read the packets sent from A.


Now when I try to read UDP packets with
ffplay {address-ip-with-port}
I have the following error:

Could not initialize SDL - No available video device
(Did you set the DISPLAY variable?)



How do I fix this, since there is no display on the EC2 instance?
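(As an aside, the stream can also be checked headlessly with ffprobe or ffmpeg's null muxer instead of ffplay; a sketch, reusing the address and port the consumer code below listens on:)
ffprobe -v info udp://172.31.57.243:55055
ffmpeg -i udp://172.31.57.243:55055 -t 5 -f null -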


Also, if it helps, this is my code for the consumer part:


# Create your views here.
from django.http import HttpResponse
from django.template import loader
from django.shortcuts import render
# from .models import Vehicule
import cv2
import threading
from django.views.decorators import gzip
from django.http import StreamingHttpResponse


@gzip.gzip_page
def webcam(request):
    try:
        cam = VideoCamera()
        return StreamingHttpResponse(gen(cam), content_type="multipart/x-mixed-replace;boundary=frame")
    except:
        pass
    return render(request, 'vebcam.html')

# capture video
class VideoCamera(object):
    def __init__(self):
        # Open the UDP stream through the FFmpeg backend.
        self.video = cv2.VideoCapture('udp://172.31.57.243:55055', cv2.CAP_FFMPEG)
        (self.grabbed, self.frame) = self.video.read()
        threading.Thread(target=self.update, args=()).start()

    def __del__(self):
        self.video.release()

    def get_frame(self):
        image = self.frame
        _, jpeg = cv2.imencode('.jpg', image)
        return jpeg.tobytes()

    def update(self):
        # Keep grabbing the latest frame in a background thread.
        while True:
            (self.grabbed, self.frame) = self.video.read()

def gen(camera):
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')



It runs fine locally but not on EC2.


This is the log that led me to check ffplay


Traceback (most recent call last):
 File "/usr/lib/python3.8/wsgiref/handlers.py", line 138, in run
 self.finish_response()
 File "/usr/lib/python3.8/wsgiref/handlers.py", line 183, in finish_response
 for data in self.result:
 File "/home/ubuntu/.local/lib/python3.8/site-packages/django/utils/text.py", line 304, in compress_sequence
 for item in sequence:
 File "/home/ubuntu/site/mysite/mysite/views.py", line 42, in gen
 frame = camera.get_frame()
 File "/home/ubuntu/site/mysite/mysite/views.py", line 33, in get_frame
 _, jpeg = cv2.imencode('.jpg', image)
cv2.error: OpenCV(4.5.3) /tmp/pip-req-build-xw6jtoah/opencv/modules/imgcodecs/src/loadsave.cpp:978: error: (-215:Assertion failed) !image.empty() in function 'imencode'
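(The assertion at the bottom means cv2.imencode received an empty image, which happens when VideoCapture.read() returns (False, None) because no frame has been decoded from the UDP stream. A small guard in get_frame, sketched below against the VideoCamera class above, would make that case explicit instead of crashing the streaming response; gen() would then need to skip None frames before yielding.)

    def get_frame(self):
        # read() yields (False, None) until a frame is actually decoded from
        # the UDP stream; returning None avoids the !image.empty() assertion.
        if not self.grabbed or self.frame is None:
            return None
        _, jpeg = cv2.imencode('.jpg', self.frame)
        return jpeg.tobytes()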