
Other articles (52)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If in doubt, contact your MédiaSpip administrator to find out.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

On other sites (11222)

  • Real-time watermarking with MPEG-DASH

    14 July 2016, by Calvin W.

    In my system, I want to embed a unique watermark (e.g. the client's IP address and a timestamp) into the video that each client watches.

    When I did this with OpenCV, it took 25 minutes for a 15-minute video, and I still needed to transcode the result to mp4 with ffmpeg.

    Now I'm trying ffmpeg's built-in watermarking instead, but it still takes some time.

    Is it possible to send the video to the client side with MPEG-DASH while ffmpeg is still transcoding it? (A sketch of that approach follows the code below.)

    System spec (Amazon EC2 c3.xlarge):
    Intel Xeon E5-2680 v2 (Ivy Bridge) - 4 vCPU
    7.5G RAM
    40GB SSD
    Ubuntu 14.04 LTS
    OpenCV2.4.13
    ffmpeg 3.1.1

    Code:

    import cv2
    import sys
    import time
    from datetime import datetime as dt

    # frame rate of the input video
    fps = float(sys.argv[4])
    # encode to AVC
    fourcc = cv2.cv.CV_FOURCC('A', 'V', 'C', '1')
    # transparency of the text
    alpha = 0.1
    beta = 1 - alpha

    # input video
    cap = cv2.VideoCapture(sys.argv[3])

    # current frame index, start from 0
    frameIndex = 0

    # frame indices bounding the 10 s - 20 s window where the text is drawn
    time10 = int(fps * 10)
    time20 = int(fps * 20)

    # get input video's width/height
    width = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT))

    # config output (error using .mp4)
    out = cv2.VideoWriter('output.avi', fourcc, fps, (width, height))

    # access time
    timeStr = dt.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S')

    requestIP = sys.argv[1]
    username = sys.argv[2]
    text = "%s %s %s" % (requestIP, username, timeStr)

    # start reading the video
    while cap.isOpened():
        ret, frame = cap.read()
        if ret:
            # add text between 10 s and 20 s
            if frameIndex > time10 and frameIndex < time20:
                # clone the frame and draw the text on the copy
                overlay = frame.copy()
                cv2.putText(overlay, text, (100, 100), cv2.FONT_HERSHEY_PLAIN, 0.5, (255, 255, 255))
                # blend the copy with the original so the text is transparent
                cv2.addWeighted(overlay, alpha, frame, beta, 0, frame)
            # write frame to output
            out.write(frame)
            frameIndex += 1
        # stop once the last frame has been read
        if cap.get(cv2.cv.CV_CAP_PROP_POS_FRAMES) == cap.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT):
            break

    # end of video: release input and output
    cap.release()
    out.release()
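
    On the MPEG-DASH half of the question: ffmpeg can burn the text in with its drawtext filter and hand the output straight to its dash muxer in one pass, and the muxer updates the manifest as segments are completed, so a player can start fetching while the transcode is still running. Below is a minimal sketch of that idea rather than a drop-in answer: the input name, font path, watermark text and x264 settings are assumptions, and it presumes an ffmpeg build with libfreetype and libx264 enabled.

    import subprocess

    # Assumed per-request values; fill these in from the real session.
    text = "203.0.113.7 alice 2016-07-14 12:00:00"

    # drawtext burns the watermark into the frames; white@0.1 keeps it mostly
    # transparent and enable='between(t,10,20)' limits it to the 10 s - 20 s
    # window, mirroring the OpenCV version above.
    drawtext = (
        "drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:"
        "text='%s':x=100:y=100:fontsize=18:fontcolor=white@0.1:"
        "enable='between(t,10,20)'" % text
    )

    # The dash muxer writes manifest.mpd plus media segments as encoding
    # progresses, so clients can begin playback before ffmpeg finishes.
    subprocess.check_call([
        "ffmpeg", "-i", "input.mp4",
        "-vf", drawtext,
        "-c:v", "libx264", "-preset", "veryfast",
        "-c:a", "aac",
        "-use_template", "1", "-use_timeline", "1",
        "-f", "dash", "manifest.mpd",
    ])

    How much this helps depends on encoder throughput: -preset veryfast trades some compression efficiency for speed, which is usually the right trade when every viewer needs a personal encode.
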
  • Encoding audio_common messages to OPUS

    14 June 2023, by djangbahevans

    I am trying to stream microphone and camera data to Amazon KVS WebRTC. I'm able to make video work using this package (adapted for noetic), but I am struggling to make audio work. I'm using the audio_capture package to get mp3 frames, and I'm trying to convert them to OPUS frames before streaming to KVS, but I'm unsure how to do this. I wrote the code below based on the few resources I could find on using ffmpeg, but it's not working: avcodec_fill_audio_frame is returning -22.

    #include "opus_encoder.h"

OPUSEncoder::OPUSEncoder() {
  av_register_all();
  // assignment, not comparison: the pointer must be initialised here so the
  // destructor's null check is meaningful
  codecContext = nullptr;
}

OPUSEncoder::~OPUSEncoder() {
  if (codecContext != nullptr) {
    avcodec_free_context(&codecContext);
  }
}

int OPUSEncoder::Initialize(int Fs, int channels) {
  AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_OPUS);
  if (!codec) {
    printf("Codec not found\n");
    return -1;
  }

  codecContext = avcodec_alloc_context3(codec);
  if (!codecContext) {
    printf("Could not allocate audio codec context\n");
    return -1;
  }

  codecContext->sample_fmt = AV_SAMPLE_FMT_S16;
  codecContext->bit_rate = 128000;
  codecContext->sample_rate = Fs;
  codecContext->channel_layout = av_get_default_channel_layout(channels);
  codecContext->channels = channels;

  if (avcodec_open2(codecContext, codec, nullptr) < 0) {
    printf("Could not open codec\n");
    return -1;
  }

  // After opening, codecContext->frame_size holds the number of samples per
  // channel the Opus encoder expects in each frame (e.g. 960 for 20 ms at
  // 48 kHz); Encode() must be fed exactly that many samples at a time.

  return 0;
}

int OPUSEncoder::Encode(const uint8_t *audio_data, int frameSize,
                        uint8_t *out) {
  // audio_data must already be raw PCM in the configured sample format;
  // mp3 frames from audio_capture have to be decoded to PCM first.
  AVPacket pkt;
  av_init_packet(&pkt);
  pkt.data = nullptr;
  pkt.size = 0;

  AVFrame *frame = av_frame_alloc();
  frame->nb_samples = frameSize;
  frame->format = codecContext->sample_fmt;
  frame->channel_layout = codecContext->channel_layout;

  // avcodec_fill_audio_frame() checks buf_size against
  // nb_samples * channels * bytes_per_sample; frameSize * 2 only covers one
  // 16-bit channel, so with stereo input the check fails and the call
  // returns AVERROR(EINVAL), i.e. -22.
  int bufSize = frameSize * codecContext->channels *
                av_get_bytes_per_sample(codecContext->sample_fmt);
  int ret = avcodec_fill_audio_frame(frame, codecContext->channels,
                                     codecContext->sample_fmt, audio_data,
                                     bufSize, 0);
  if (ret < 0) {
    printf("Error filling audio frame: %d\n", ret);
    av_frame_free(&frame);
    return -1;
  }

  ret = avcodec_send_frame(codecContext, frame);
  if (ret < 0) {
    printf("Error sending the frame to the encoder\n");
    av_frame_free(&frame);
    return -1;
  }

  while (ret >= 0) {
    ret = avcodec_receive_packet(codecContext, &pkt);
    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
      break;  // encoder needs more input, or is fully drained
    } else if (ret < 0) {
      printf("Error encoding audio frame\n");
      av_frame_free(&frame);
      return -1;
    }

    memcpy(out, pkt.data, pkt.size);
    out += pkt.size;
    av_packet_unref(&pkt);
  }

  av_frame_free(&frame);

  return 0;
}
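
    As a worked example of the size check behind the -22 (a sketch, assuming 48 kHz stereo s16 input and Opus's usual 20 ms frame):

    samples = 48000 * 20 // 1000    # 960 samples per channel in one 20 ms frame
    bytes_needed = samples * 2 * 2  # 2 channels * 2 bytes per s16 sample = 3840
    # the original call passed frameSize * 2 = 1920 bytes, hence AVERROR(EINVAL)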

  • ffmpeg file conversion AWS Lambda

    10 April 2021, by eartoolbox

    I want a .webm file to be converted to a .wav file after it hits my S3 bucket. I followed this tutorial and tried to adapt it to my use case, using the .webm -> .wav ffmpeg command described here.

    My AWS Lambda function generally works: when my .webm file hits the source bucket, it is converted to .wav and ends up in the destination bucket. However, the resulting .wav file is always 0 bytes (the .webm is not, and contains the expected audio). Did I adapt the code wrong? I only changed the ffmpeg_cmd line from the first link.

    import json
import os
import subprocess
import shlex
import boto3

S3_DESTINATION_BUCKET = "hmtm-out"
SIGNED_URL_TIMEOUT = 60

def lambda_handler(event, context):

    s3_source_bucket = event['Records'][0]['s3']['bucket']['name']
    s3_source_key = event['Records'][0]['s3']['object']['key']

    s3_source_basename = os.path.splitext(os.path.basename(s3_source_key))[0]
    s3_destination_filename = s3_source_basename + ".wav"

    s3_client = boto3.client('s3')
    s3_source_signed_url = s3_client.generate_presigned_url('get_object',
        Params={'Bucket': s3_source_bucket, 'Key': s3_source_key},
        ExpiresIn=SIGNED_URL_TIMEOUT)
    
    # The adapted command wrote the WAV to a local file and left a bare "-" as
    # a second output that ffmpeg cannot pick a format for, so the run failed
    # and stdout stayed empty, which is why the uploaded object was 0 bytes.
    # Send the WAV to stdout instead ("-f wav" is required because ffmpeg
    # cannot guess a container for a pipe):
    ffmpeg_cmd = "/opt/bin/ffmpeg -i \"" + s3_source_signed_url + "\" -f wav -c:a pcm_f32le -"

    command1 = shlex.split(ffmpeg_cmd)
    p1 = subprocess.run(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

    resp = s3_client.put_object(Body=p1.stdout, Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)

    return {
        'statusCode': 200,
        'body': json.dumps('Processing complete successfully')
    }
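
    A hedged side note on this pattern: p1.stdout holds the entire WAV in the Lambda's memory, and if ffmpeg fails the handler still returns 200 after uploading an empty object. A small defensive variant is sketched below; the helper name run_ffmpeg_to_tmp is made up for illustration, and /tmp is used because it is the only writable path in a Lambda container.

import subprocess

def run_ffmpeg_to_tmp(ffmpeg_path, source_url, basename):
    """Transcode to /tmp and fail loudly instead of uploading 0 bytes."""
    local_path = "/tmp/" + basename + ".wav"
    cmd = [ffmpeg_path, "-y", "-i", source_url, "-c:a", "pcm_f32le", local_path]
    p = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    if p.returncode != 0:
        # surface ffmpeg's own error message in the CloudWatch logs
        raise RuntimeError("ffmpeg failed: " + p.stderr.decode(errors="replace"))
    return local_path

    The handler would then call s3_client.upload_file(local_path, S3_DESTINATION_BUCKET, s3_destination_filename), which streams from disk instead of holding the whole file in RAM.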