
Other articles (78)

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • User profiles

    12 April 2011, by

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
    Users can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administrer" (administer) section of the site.
    From there, in the navigation menu, you can reach a "Gestion des langues" (language management) section that lets you enable support for new languages.
    Each newly added language can still be deactivated as long as no object has been created in that language; once one has, it becomes greyed out in the configuration and (...)

On other sites (13106)

  • Encoding audio_common messages to OPUS

    14 June 2023, by djangbahevans

    


    I am trying to stream microphone and camera data to Amazon KVS WebRTC. I'm able to make video work using this package (adapted for noetic); however, I am struggling to make audio work. I'm using the audio_capture package to get MP3 frames, and I'm trying to convert these to OPUS frames before streaming to KVS, but I'm unsure how to do this. I wrote this bit of code based on the few resources I could find on using ffmpeg, but it's not working: avcodec_fill_audio_frame is returning -22 (a possible cause is sketched after the code below).

    


    #include "opus_encoder.h"

OPUSEncoder::OPUSEncoder() {
  av_register_all();
  codecContext = nullptr;
}

OPUSEncoder::~OPUSEncoder() {
  if (codecContext != nullptr) {
    avcodec_free_context(&codecContext);
  }
}

int OPUSEncoder::Initialize(int Fs, int channels) {
  AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_OPUS);
  if (!codec) {
    printf("Codec not found\n");
    return -1;
  }

  codecContext = avcodec_alloc_context3(codec);
  if (!codecContext) {
    printf("Could not allocate audio codec context\n");
    return -1;
  }

  codecContext->sample_fmt = AV_SAMPLE_FMT_S16;
  codecContext->bit_rate = 128000;
  codecContext->sample_rate = Fs;
  codecContext->channel_layout = av_get_default_channel_layout(channels);
  codecContext->channels = channels;

  if (avcodec_open2(codecContext, codec, nullptr) < 0) {
    printf("Could not open codec\n");
    return -1;
  }

  return 0;
}

int OPUSEncoder::Encode(const uint8_t *audio_data, int frameSize,
                        uint8_t *out) {
  AVPacket pkt;
  av_init_packet(&pkt);
  pkt.data = nullptr;
  pkt.size = 0;

  AVFrame *frame = av_frame_alloc();
  frame->nb_samples = frameSize;
  frame->format = codecContext->sample_fmt;
  frame->channel_layout = codecContext->channel_layout;

  int ret = avcodec_fill_audio_frame(frame, codecContext->channels,
                                     codecContext->sample_fmt, audio_data,
                                     frameSize * 2, 0);
  if (ret < 0) {
    printf("Error filling audio frame: %d\n", ret);
    return -1;
  }

  ret = avcodec_send_frame(codecContext, frame);
  if (ret < 0) {
    printf("Error sending the frame to the encoder\n");
    return -1;
  }

  while (ret >= 0) {
    ret = avcodec_receive_packet(codecContext, &pkt);
    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
      return 0;
    } else if (ret < 0) {
      printf("Error encoding audio frame\n");
      return -1;
    }

    memcpy(out, pkt.data, pkt.size);
    out += pkt.size;
    av_packet_unref(&pkt);
  }

  av_frame_free(&frame);

  return 0;
}
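
    A hedged note on the -22: AVERROR(EINVAL) is what avcodec_fill_audio_frame typically returns when the buffer it is given is smaller than nb_samples * channels * bytes_per_sample, and the code above passes frameSize * 2, which only covers mono S16. Below is a minimal sketch of the fill step, assuming the MP3 frames have already been decoded to interleaved S16 PCM; the fill_s16_frame helper is hypothetical, not part of the original code.

// Hypothetical helper (not the poster's code): size the buffer with
// av_samples_get_buffer_size() so avcodec_fill_audio_frame() receives the
// full nb_samples * channels * 2 bytes it expects for interleaved S16.
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/samplefmt.h>
}

static int fill_s16_frame(AVCodecContext *ctx, AVFrame *frame,
                          const uint8_t *pcm, int nb_samples) {
  frame->nb_samples = nb_samples;                  // samples per channel
  frame->format = ctx->sample_fmt;                 // AV_SAMPLE_FMT_S16
  frame->channel_layout = ctx->channel_layout;

  // Total byte size of the interleaved buffer across all channels.
  int buf_size = av_samples_get_buffer_size(nullptr, ctx->channels, nb_samples,
                                            ctx->sample_fmt, 0);
  if (buf_size < 0) {
    return buf_size;  // invalid sample format / channel count
  }
  return avcodec_fill_audio_frame(frame, ctx->channels, ctx->sample_fmt,
                                  pcm, buf_size, 0);
}

    Note also that the Opus encoder consumes raw PCM, so the MP3 frames from audio_capture would have to be decoded first (or audio_capture switched to publish uncompressed audio) before being passed to Encode.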


    


  • ffmpeg file conversion AWS Lambda

    10 April 2021, by eartoolbox

    I want a .webm file to be converted to a .wav file after it lands in my S3 bucket. I followed this tutorial and tried to adapt it to my use case, using the .webm -> .wav ffmpeg command described here.

    


    My AWS Lambda function generally works, in that when my .webm file hits the source bucket, a converted .wav ends up in the destination bucket. However, the resulting .wav file is always 0 bytes (while the .webm is not, and contains the expected audio). Did I adapt the code wrong? I only changed the ffmpeg_cmd line from the first link (a possible fix is sketched after the code below).

    


    import json
import os
import subprocess
import shlex
import boto3

S3_DESTINATION_BUCKET = "hmtm-out"
SIGNED_URL_TIMEOUT = 60

def lambda_handler(event, context):

    s3_source_bucket = event['Records'][0]['s3']['bucket']['name']
    s3_source_key = event['Records'][0]['s3']['object']['key']

    s3_source_basename = os.path.splitext(os.path.basename(s3_source_key))[0]
    s3_destination_filename = s3_source_basename + ".wav"

    s3_client = boto3.client('s3')
    s3_source_signed_url = s3_client.generate_presigned_url('get_object',
        Params={'Bucket': s3_source_bucket, 'Key': s3_source_key},
        ExpiresIn=SIGNED_URL_TIMEOUT)
    
    ffmpeg_cmd = "/opt/bin/ffmpeg -i \"" + s3_source_signed_url + "\" -c:a pcm_f32le " + s3_destination_filename + " -"
    
    
    command1 = shlex.split(ffmpeg_cmd)
    p1 = subprocess.run(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

    resp = s3_client.put_object(Body=p1.stdout, Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)

    return {
        'statusCode': 200,
        'body': json.dumps('Processing complete successfully')
    }
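
    One plausible cause of the 0-byte upload: the command above gives ffmpeg a plain output filename (the Lambda filesystem is read-only outside /tmp) plus a bare trailing "-" with no output format, so the run most likely fails and p1.stdout, which is what gets uploaded, stays empty. Here is a minimal sketch of one workaround, writing to /tmp and uploading that file; it reuses the names defined above and is an untested adaptation, not the tutorial's code.

    # Hypothetical rework of the ffmpeg step inside lambda_handler: write the
    # .wav to Lambda's writable /tmp directory, then upload that file rather
    # than ffmpeg's stdout.
    local_path = "/tmp/" + s3_destination_filename
    ffmpeg_cmd = "/opt/bin/ffmpeg -y -i \"" + s3_source_signed_url + "\" -c:a pcm_f32le " + local_path
    subprocess.run(shlex.split(ffmpeg_cmd), check=True,
                   stdout=subprocess.PIPE, stderr=subprocess.PIPE)

    with open(local_path, "rb") as wav_file:
        s3_client.put_object(Body=wav_file, Bucket=S3_DESTINATION_BUCKET,
                             Key=s3_destination_filename)

    Alternatively, keeping the original piping approach should work if the output goes to stdout with an explicit container (for example "-f wav -") instead of a filename, so that p1.stdout actually holds the audio bytes.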
 


    

