
Other articles (88)

  • Improving the base version

    13 September 2013

    A nicer multiple select
    The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
    To use it, activate the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Customizing the site by adding a logo, a banner or a background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present changes to your MédiaSPIP, or news about your projects, on your MédiaSPIP site via the news section.
    In MédiaSPIP's default spipeo theme, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the form used to create a news item.
    News item creation form: for a document of the news type, the default fields are: publication date (customize the publication date) (...)

On other sites (9352)

  • FFmpeg transcoding on Lambda results in unusable (static) audio

    17 May 2020, by jmkmay

    I'd like to move towards serverless for audio transcoding routines in AWS. I've been trying to set up a Lambda function to do just that: execute a static FFmpeg binary and re-upload the resulting audio file. The static binary I'm using is here.

    



    The Lambda function I'm using in Python looks like this:

    



import json
import os
import subprocess
from io import BytesIO

import boto3

s3client = boto3.client('s3')
s3resource = boto3.resource('s3')  # not used below

# Copy the packaged static ffmpeg binary to /tmp (the only writable location
# in the Lambda environment) and make it executable.
os.system("cp -ra ./bin/ffmpeg /tmp/")
os.system("chmod -R 775 /tmp")


def lambda_handler(event, context):

    # Identify the object that triggered the S3 event.
    bucketname = event["Records"][0]["s3"]["bucket"]["name"]
    filename = event["Records"][0]["s3"]["object"]["key"]

    # Download the source audio and write it to /tmp.
    audioData = grabFromS3(bucketname, filename)

    with open('/tmp/' + filename, 'wb') as f:
        f.write(audioData.read())

    os.chdir('/tmp/')

    try:
        # Transcode the uploaded WAV to AAC with the static ffmpeg binary.
        process = subprocess.check_output(
            './ffmpeg -i /tmp/joe_and_bill.wav /tmp/joe_and_bill.aac',
            shell=True, stderr=subprocess.STDOUT)
        pushToS3(bucketname, filename)
        return process.decode('utf-8')
    except subprocess.CalledProcessError as e:
        return e.output.decode('utf-8'), os.listdir()


def grabFromS3(bucket, file):

    # Read the S3 object into memory.
    obj = s3client.get_object(Bucket=bucket, Key=file)
    data = BytesIO(obj['Body'].read())

    return data


def pushToS3(bucket, file):

    # Upload the transcoded .aac next to the original key.
    s3client.upload_file('/tmp/' + file[:-4] + '.aac', bucket, file[:-4] + '.aac')

    return


    



    You can listen to the output of this here. WARNING: Turn your volume down or your ears will bleed.

    



    The original file can be heard here.

    



    Does anyone have any idea what might be causing the encoding errors? It doesn't seem to be an issue with the file upload, since the MD5 of the file on the Lambda filesystem matches the MD5 of the uploaded file.
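
    For reference, a minimal sketch of that kind of integrity check inside the handler, assuming the object was uploaded in a single part (so its S3 ETag is simply the hex MD5 of the object) and using placeholder bucket and key names:

import hashlib

import boto3

s3 = boto3.client('s3')

def md5_matches(bucket, key, local_path):
    # Hash the copy that was written to /tmp.
    with open(local_path, 'rb') as f:
        local_md5 = hashlib.md5(f.read()).hexdigest()

    # For single-part, unencrypted uploads the ETag is the hex MD5 of the object.
    etag = s3.head_object(Bucket=bucket, Key=key)['ETag'].strip('"')
    return local_md5 == etag

# Example (placeholder names):
# md5_matches('my-bucket', 'joe_and_bill.wav', '/tmp/joe_and_bill.wav')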

    



    I've also tried building the static binary on an Amazon Linux instance in EC2, then zipping and porting it into the Lambda project, but the same issue persists.

    



    I'm stumped! :(

    


  • How to generate video as fast as possible with subtitles and audio on node.js + ffmpeg?

    12 September 2018, by DSeregin

    Intro:

    We receive some pieces of text from the site.
    The pieces arrive at the node.js server.

    As output we need a video, merged from all the pieces of text, voiced by a machine voice, with subtitles and a background audio track added, so that the user can share the video on social networks. The MKV format is not supported by VK.com.

    The options we have tried:
    1. Get all the text at once, generate the entire speech, create a subtitle file, and burn the subtitles into an .mp4 video (vk.com does not support the .mkv container). This took 12 seconds of processing for a 45-second video on the local computer.
    2. Generate an audio and a video file for each piece of text (with subtitles added), which took one second per piece, then merge all the pieces together on the final request. The final (merge) request took 2-3 seconds, which is already bearable.

    The second variant looks acceptable in terms of speed, but with 50 clients running at the same time, the computer (tested on a 2013 MacBook Pro, 2.4 GHz i7, 8 GB 1600 MHz DDR3, 256 GB SSD) processed only one piece from one client in 60 seconds (60 times slower), and then it hung completely.

    The commands we used:

    • Burn subtitles into the video and trim it to a nominal 6 seconds (the code passes a unix timestamp)

    ffmpeg -i import/back.mov -i export_0/tmp.srt -scodec mov_text -t 6 export_0/output.mov

    • Merging all audio

    ffmpeg -i audio1.mp3 .... -i audio15.mp3 merged.mp3

    • Overlay the background audio track on the speech

    ffmpeg -i merged.mp3 -i back.mp3 -filter_complex amerge -ac 2 -c:a libmp3lame -q:a 4 -shortest audio.mp3

    • Merging all videos

    ffmpeg -f concat -i video.txt -c copy video.mp4

    • Overlay audio on video

    ffmpeg -i audio.mp3 -i video.mp4 -i test.mp4 -i export/output.mp3 -c:v copy -c:a aac -map 0:v:0 -map 1:a:0 -shortest output.mp4

    Questions that torment us:

    1. Can this be done faster?

    2. Can I use other codecs or ways of joining the pieces without re-encoding? (see the sketch below)

    3. Should we call ffmpeg directly, without a wrapper? (in fact, that saves 50-100 ms)

    4. Should we avoid saving to disk, and instead write the data to streams and join them at the end?
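
    Regarding question 2, ffmpeg's concat demuxer with stream copy joins segments without re-encoding, provided every segment was encoded with identical parameters. A minimal sketch of that approach (in Python rather than node.js, with placeholder segment names):

import os
import subprocess
import tempfile

# Hypothetical segments that already share codecs, resolution and timebase.
segments = ["export_0/output.mov", "export_1/output.mov", "export_2/output.mov"]

# Write the list file expected by the concat demuxer.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as listfile:
    for seg in segments:
        listfile.write("file '{}'\n".format(os.path.abspath(seg)))

# -c copy remuxes the segments without re-encoding them.
subprocess.run(
    ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
     "-i", listfile.name, "-c", "copy", "video.mp4"],
    check=True)

    Because -c copy only remuxes, the merge cost is dominated by I/O rather than encoding.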

  • FFMPEG failing in AWS Lambda

    18 February 2019, by Zaid Amir

    I am trying to create a transcoding function for short videos. The function is hosted on AWS Lambda. The problem is that AWS Lambda seems to be missing something that FFMPEG requires, at least according to Amazon.

    I contacted Amazon earlier and this is their response to the issue:

    We found that the FFMPEG operations require at least libx264 and an
    AAC library, both of which will have dependencies of their own. To
    troubleshoot the issue it will involve diving deeper into the full
    dependency chain. We can see that it works in the Amazon Linux
    environment however, the environment is similar but not identical to
    the lambda environment. There can be some dependencies that exist in
    Amazon Linux but not in lambda environment as Lambda runs on the
    container. Here, as FFmpeg is a third party software, diving deeper
    into the dependency chain and verifying the version compatibilities is
    very hard to do. Unfortunately going further, this is bound to go into
    architecture and code support which is out of AWS Support scope. I
    hope you understand our limitations. However should FFmpeg support
    have any questions specific to the Lambda platform, please do let us
    know and we will be happy to assist. We will be in better position to
    investigate further once you receive an update from the FFmpeg support
    suggesting an issue from Lambda end.

    At AWS's suggestion, I contacted FFMPEG on the developers mailing list; my message was rejected on the grounds that it was better suited to the ffmpeg users mailing list than to the developers list. I sent an email to ’ffmpeg-user@ffmpeg.org’ a week ago and have not received any response yet.

    I then built a dynamically linked ffmpeg, making sure to package all of its libraries, and checked ldd on each one. I also made a small Lambda function that looped over all the binaries and ran ldd on each of them, and compared that to the output I got from Amazon Linux: the same dependencies and versions exist on both Lambda and the AWS Linux instance, yet ffmpeg still fails on Lambda.
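
    A minimal sketch of that kind of dependency check, assuming a Python 3.7+ runtime where ldd is available, and assuming the binaries and shared libraries are packaged under ./bin and ./lib (placeholder paths):

import glob
import subprocess

def lambda_handler(event, context):
    report = {}
    # Run ldd over every packaged binary and shared library so the resolved
    # dependency paths can be compared with the output from an Amazon Linux host.
    for path in glob.glob('./bin/*') + glob.glob('./lib/*.so*'):
        result = subprocess.run(['ldd', path], capture_output=True, text=True)
        report[path] = result.stdout + result.stderr
    return report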

    You can find a detailed log file here: https://www.datafilehost.com/d/6e5e21bb

    And this is a sample of the errors I'm getting, repeated across the entire log file:

    2018-08-14T12:27:10.874Z [h264 @ 0x65c2fc0] concealing 2628 DC, 2628 AC, 2628 MV errors in P frame

    2018-08-14T12:27:10.874Z [aac @ 0x65d2f00] channel element 2.11 is not allocated

    2018-08-14T12:27:10.874Z Error while decoding stream #0:1 : Invalid data found when processing input

    2018-08-14T12:27:10.874Z [h264 @ 0x67e86c0] Invalid NAL unit size (108085662 > 1649).

    2018-08-14T12:27:10.874Z [h264 @ 0x67e86c0] Error splitting the input into NAL units.

    2018-08-14T12:27:10.874Z [aac @ 0x65d2f00] channel element 2.0 is not allocated

    2018-08-14T12:27:10.874Z Error while decoding stream #0:1 : Invalid data found when processing input

    2018-08-14T12:27:10.874Z [h264 @ 0x68189c0] Invalid NAL unit size (71106974 > 1085).

    2018-08-14T12:27:10.874Z [h264 @ 0x68189c0] Error splitting the input into NAL units.

    2018-08-14T12:27:10.874Z [aac @ 0x65d2f00] Pulse tool not allowed in eight short sequence.
    This log was generated when trying to perform an HLS transcode of this file: https://www.datafilehost.com/d/999a4492

    Note that the issue is not related to that file alone, nor is it specific to HLS; it is general and happens with all videos and with any ffmpeg command that tries to seek within the stream. I even tried extracting a single frame from a video using the simplest possible form, for example ffmpeg -ss 00:00:02 -i file.mp4 -vframes 1 -y output.jpg, and it also fails with the same errors in the log file.

    I'm not sure how to debug this further. I tried enabling debug logs with ‘-loglevel debug’ but that did not give me any extra information. Any help or suggestions?
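
    If it helps, a minimal sketch of running that single-frame test from a handler while capturing everything ffmpeg prints, so the full trace lands in the CloudWatch logs (the binary path and file names are placeholders):

import subprocess

def lambda_handler(event, context):
    # The simplest possible command, with verbose logging; stderr is returned
    # so the complete ffmpeg trace shows up in the function's logs.
    cmd = ['/tmp/ffmpeg', '-loglevel', 'debug',
           '-ss', '00:00:02', '-i', '/tmp/file.mp4',
           '-vframes', '1', '-y', '/tmp/output.jpg']
    result = subprocess.run(cmd, capture_output=True, text=True)
    return {'returncode': result.returncode, 'stderr': result.stderr[-8000:]}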