
Other articles (62)
-
User profiles
12 April 2011
Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is automatically created when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
The user can edit their profile from their author page; a "Modifier votre profil" (Edit your profile) link in the navigation is (...) -
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player used by MediaSPIP was created specifically for it and can easily be adapted to fit a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
Configuring language support
15 November 2010
Accessing the configuration and adding supported languages
To configure support for new languages, you need to go to the "Administrer" (Administer) section of the site.
From there, the navigation menu gives access to a "Gestion des langues" (Language management) section where support for new languages can be enabled.
Each newly added language can still be disabled as long as no object has been created in that language; once one exists, the language is greyed out in the configuration and (...)
On other websites (4810)
-
Real-time watermarking with MPEG-DASH
14 July 2016, by Calvin W.
In the system, I want to add a unique watermark (e.g. the client's IP address and a timestamp) to the video that they want to watch.
But when I handled it with OpenCV, it took 25 minutes for a 15-minute video, and I also need to transcode to mp4 with ffmpeg.
Now I'm trying ffmpeg's watermark function, but it still takes some time.
Is it possible to send the video to the client side with MPEG-DASH while transcoding it with ffmpeg?
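One possibility, sketched below under the assumption that this ffmpeg build includes the drawtext filter and the dash muxer: drawtext can burn the per-client text into the picture while the dash muxer writes the MPD manifest and segments as they are encoded, so a player could start fetching segments before the transcode finishes. The input name, font settings and the 10-20 s window are placeholders for illustration.

import subprocess

# Hypothetical per-client data; in the real system these come from the request.
requestIP = "203.0.113.7"
username = "user1"
text = "%s %s" % (requestIP, username)

# drawtext draws the watermark between 10 s and 20 s at 10% opacity;
# depending on the build, a fontfile=... option may also be needed.
vf = ("drawtext=text='%s':x=100:y=100:fontcolor=white@0.1:"
      "fontsize=24:enable='between(t,10,20)'" % text)

cmd = [
    "ffmpeg", "-i", "input.mp4",
    "-vf", vf,
    "-c:v", "libx264", "-c:a", "aac",
    "-f", "dash", "manifest.mpd",
]
subprocess.check_call(cmd)

Whether the segments can actually be served while they are still being written depends on the web server in front of them; the sketch only shows that watermarking and DASH packaging can share a single ffmpeg pass instead of a separate OpenCV step.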
System spec (Amazon EC2 c3.xlarge):
Intel Xeon E5-2680 v2 (Ivy Bridge) - 4 vCPU
7.5G RAM
40GB SSD
Ubuntu 14.04 LTS
OpenCV2.4.13
ffmpeg 3.1.1
code:
import cv2
import sys
import time
from datetime import datetime as dt
# frame rate of the input video
fps = float(sys.argv[4])
# encode to AVC
fourcc = cv2.cv.CV_FOURCC('A', 'V', 'C', '1')
# transparency of the text
alpha = 0.1
beta = 1 - alpha
# frame indices corresponding to 10 s and 20 s (the watermark window)
time10 = fps * 10
time20 = fps * 20
# input video
cap = cv2.VideoCapture(sys.argv[3])
# current frame index, starting from 0
frameIndex = 0
# get the input video's width/height
width = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT))
# configure output (error when using .mp4)
out = cv2.VideoWriter('output.avi', fourcc, fps, (width, height))
# access time
timeStr = dt.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S')
requestIP = sys.argv[1]
username = sys.argv[2]
text = "%s %s %s" % (requestIP, username, timeStr)
# start reading the video
while cap.isOpened():
    ret, frame = cap.read()
    if ret:
        # add the text between 10 s and 20 s
        if frameIndex > time10 and frameIndex < time20:
            # clone the frame to draw the text on
            overlay = frame.copy()
            cv2.putText(overlay, text, (100, 100), cv2.FONT_HERSHEY_PLAIN, 0.5, (255, 255, 255))
            # blend both frames to make the text transparent
            cv2.addWeighted(overlay, alpha, frame, beta, 0, frame)
        # write the frame to the output
        out.write(frame)
        frameIndex += 1
    # stop at the last frame
    if cap.get(cv2.cv.CV_CAP_PROP_POS_FRAMES) == cap.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT):
        break
# end of video: release resources
cap.release()
out.release()
-
Encoding audio_common messages to OPUS
14 June 2023, by djangbahevans

I am trying to stream microphone and camera data to Amazon KVS WebRTC. I'm able to make video work using this package (adapted for noetic), but I am struggling to make audio work. I'm using the audio_capture package to get mp3 frames, and I'm trying to convert these to OPUS frames before streaming to KVS, but I'm unsure how to do this. I wrote the code below based on the few resources I could find on using ffmpeg, but it's not working.
avcodec_fill_audio_frame is returning -22 (AVERROR(EINVAL)).

#include "opus_encoder.h"

OPUSEncoder::OPUSEncoder() {
  av_register_all();
  codecContext = nullptr;
}

OPUSEncoder::~OPUSEncoder() {
  if (codecContext != nullptr) {
    avcodec_free_context(&codecContext);
  }
}

// Set up an Opus encoder context for the given sample rate and channel count.
int OPUSEncoder::Initialize(int Fs, int channels) {
  AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_OPUS);
  if (!codec) {
    printf("Codec not found\n");
    return -1;
  }

  codecContext = avcodec_alloc_context3(codec);
  if (!codecContext) {
    printf("Could not allocate audio codec context\n");
    return -1;
  }

  codecContext->sample_fmt = AV_SAMPLE_FMT_S16;
  codecContext->bit_rate = 128000;
  codecContext->sample_rate = Fs;
  codecContext->channel_layout = av_get_default_channel_layout(channels);
  codecContext->channels = channels;

  if (avcodec_open2(codecContext, codec, nullptr) < 0) {
    printf("Could not open codec\n");
    return -1;
  }

  return 0;
}

int OPUSEncoder::Encode(const uint8_t *audio_data, int frameSize,
                        uint8_t *out) {
  AVPacket pkt;
  av_init_packet(&pkt);
  pkt.data = nullptr;
  pkt.size = 0;

  // Wrap the raw interleaved samples in an AVFrame for the encoder.
  AVFrame *frame = av_frame_alloc();
  frame->nb_samples = frameSize;
  frame->format = codecContext->sample_fmt;
  frame->channel_layout = codecContext->channel_layout;

  int ret = avcodec_fill_audio_frame(frame, codecContext->channels,
                                     codecContext->sample_fmt, audio_data,
                                     frameSize * 2, 0);
  if (ret < 0) {
    printf("Error filling audio frame: %d\n", ret);
    return -1;
  }

  ret = avcodec_send_frame(codecContext, frame);
  if (ret < 0) {
    printf("Error sending the frame to the encoder\n");
    return -1;
  }

  // Drain all packets produced for this frame into the output buffer.
  while (ret >= 0) {
    ret = avcodec_receive_packet(codecContext, &pkt);
    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
      return 0;
    } else if (ret < 0) {
      printf("Error encoding audio frame\n");
      return -1;
    }

    memcpy(out, pkt.data, pkt.size);
    out += pkt.size;
    av_packet_unref(&pkt);
  }

  av_frame_free(&frame);

  return 0;
}



-
ffmpeg file conversion AWS Lambda
10 April 2021, by eartoolbox
I want a .webm file to be converted to a .wav file after it hits my S3 bucket. I followed this tutorial and tried to adapt it to my use case using the .webm -> .wav ffmpeg command described here.


My AWS Lambda function generally works, in that when my .webm file hits the source bucket, it is converted to .wav and ends up in the destination bucket. However, the resulting .wav file is always 0 bytes (while the .webm is not, and contains the appropriate audio). Did I adapt the code wrong? I only changed the ffmpeg_cmd line from the first link.


import json
import os
import subprocess
import shlex
import boto3

S3_DESTINATION_BUCKET = "hmtm-out"
SIGNED_URL_TIMEOUT = 60

def lambda_handler(event, context):

    s3_source_bucket = event['Records'][0]['s3']['bucket']['name']
    s3_source_key = event['Records'][0]['s3']['object']['key']

    s3_source_basename = os.path.splitext(os.path.basename(s3_source_key))[0]
    s3_destination_filename = s3_source_basename + ".wav"

    s3_client = boto3.client('s3')
    s3_source_signed_url = s3_client.generate_presigned_url('get_object',
        Params={'Bucket': s3_source_bucket, 'Key': s3_source_key},
        ExpiresIn=SIGNED_URL_TIMEOUT)

    ffmpeg_cmd = "/opt/bin/ffmpeg -i \"" + s3_source_signed_url + "\" -c:a pcm_f32le " + s3_destination_filename + " -"

    command1 = shlex.split(ffmpeg_cmd)
    p1 = subprocess.run(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

    resp = s3_client.put_object(Body=p1.stdout, Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)

    return {
        'statusCode': 200,
        'body': json.dumps('Processing complete successfully')
    }
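For comparison, a minimal sketch of the ffmpeg_cmd line only, assuming the rest of the handler stays exactly as above: in the tutorial being adapted, ffmpeg writes a single output to stdout so that p1.stdout carries the converted audio, and since a pipe has no file extension the container has to be named explicitly with -f wav. Writing to a bare local filename, as in the adapted command, is also likely to fail inside Lambda, where only /tmp is writable.

    # Sketch: keep "-" (stdout) as the only output and name its container explicitly,
    # so that p1.stdout holds the WAV data that put_object uploads.
    ffmpeg_cmd = "/opt/bin/ffmpeg -i \"" + s3_source_signed_url + "\" -c:a pcm_f32le -f wav -"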