
Media (1)
-
1 000 000 (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
Other articles (64)
-
Customize by adding your logo, banner or background image
5 September 2013. Some themes take three customization elements into account: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013. Present changes to your MédiaSPIP, or news about your projects, on your MédiaSPIP via the news section.
In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the news item creation form.
News item creation form: for a document of the news type, the default fields are: Publication date (customize the publication date) (...)
Publishing on MédiaSpip
13 June 2013. Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.
On other sites (12585)
-
Reading live output from FFMPEG using PHP
29 September 2018, by user561787
The problem I'm dealing with is getting the shell output of an ffmpeg command while it is executing, and writing it to an HTML page using PHP.
After some research I found a very similar request here: Update page content from live PHP and Python output using Ajax, which seemed to be perfect, but it's not working at all.
The basic idea is to use an AJAX request to invoke the PHP script, which should execute the command and echo the process output as it is read, taking care to use this.readyState == 3 (otherwise the JS script would receive the response only upon completion).
For the PHP section I tried using the code from the answer above (obviously adapted to my needs):
function liveExecuteCommand($command){
    while (@ob_end_flush()); // end all output buffers if any
    $proc = popen($command, 'r');
    $live_output = "";
    $complete_output = "";
    while (!feof($proc))
    {
        $live_output = fread($proc, 4096);
        $complete_output = $complete_output . $live_output;
        echo "<pre>$live_output</pre>";
        @flush();
    }
    pclose($proc);
}
And for the AJAX section I used:
function getLiveStream(){
    var ajax = new XMLHttpRequest();
    ajax.onreadystatechange = function() {
        if (this.readyState == 3) {
            document.getElementById("result").innerHTML = this.responseText;
        }
    };
    var url = 'process/getlive';
    ajax.open('GET', url, true);
    ajax.send();
}
Which sadly doesn't work.
The command being executed is this :
'ffmpeg.exe -i "C:/Users/BACKUP/Desktop/xampp/htdocs/testarea/test.mp4" -map 0:0 -map 0:1 -c:v libx264 -preset fast -crf 26 -c:a libmp3lame -ar 24000 -q:a 5 "C:\Users\BACKUP\Desktop\xampp\htdocs\testarea\output/test.mkv"'
, which I tested and it works. When I run the HTML page and the AJAX script within it, the ffmpeg command doesn't even run (I checked in Task Manager); it simply returns blank text.
When I run the PHP script by itself, the command runs and the file is converted, but nothing is echoed at all.
After some more research I also found this page, which seems to be made for this exact purpose: https://github.com/4poc/php-live-transcode/blob/master/stream.php
The relevant section is at the end; the code before it deals with ffmpeg-specific options. But it didn't work either, with exactly the same outcomes.
Now I'm considering simply writing the output to a file and reading from it dynamically, but I'd really like to know why neither approach works for me.
EDIT: PHP Execute shell command asynchronously and retrieve live output answers how to read content from a temporary file as it is being written, not directly from the process.
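A detail worth noting that would explain both failures: ffmpeg writes all of its progress output to stderr, not stdout, and popen() only captures stdout, so the loop above reads nothing even while the conversion runs. A minimal sketch of the same function with the streams merged (the 2>&1 redirect, valid in both cmd.exe and sh, is the only substantive change):
function liveExecuteCommand($command){
    while (@ob_end_flush()); // end all output buffers if any
    // Merge stderr into stdout: ffmpeg reports its progress on stderr only.
    $proc = popen($command . ' 2>&1', 'r');
    while (!feof($proc)) {
        echo "<pre>" . fread($proc, 4096) . "</pre>";
        @flush();
    }
    pclose($proc);
}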
-
Blackmagic SDK - OpenGL Letterbox and FFMPEG
18 March 2015, by Marco Reisacher
So I abandoned my idea of using the DirectShow filters, due to the lack of support for the video formats I need.
The native API uses OpenGL, which I am a total beginner with.
I stumbled upon the following problems:
-
How to automatically apply a letterbox or pillarbox depending on the width and height of the frame that gets passed to OpenGL (I'm using bpctrlanchormap.h to autosize everything and I get squeezed/stretched images); see the viewport sketch below
-
How to record a video of the OpenGL stream (I looked around and saw that ffmpeg should be able to do so, but I can't get it running). It would also be nice to record microphone audio into the same file
I’m using the Blackmagic "Capture Preview" sample.
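A common approach to the first problem, independent of the DeckLink SDK: each time a frame is drawn, compare its aspect ratio with the preview box's and shrink the OpenGL viewport along one axis before PaintGL() renders. A minimal sketch under that assumption (the function and variable names are illustrative, not SDK calls):
#include <windows.h>
#include <gl/gl.h>
// Computes a letterboxed/pillarboxed viewport that preserves the frame's
// aspect ratio inside a window of winW x winH pixels.
void setLetterboxViewport(int frameW, int frameH, int winW, int winH)
{
    double frameAspect = (double)frameW / (double)frameH;
    double winAspect = (double)winW / (double)winH;
    int vpW = winW, vpH = winH;
    if (winAspect > frameAspect)
        vpW = (int)(winH * frameAspect + 0.5); // window too wide: pillarbox
    else
        vpH = (int)(winW / frameAspect + 0.5); // window too tall: letterbox
    // Center the viewport; the uncovered bars keep the clear color.
    glViewport((winW - vpW) / 2, (winH - vpH) / 2, vpW, vpH);
}
Calling this from DrawFrame() below, between wglMakeCurrent() and PaintGL(), with the frame's GetWidth()/GetHeight() values, should stop the autosizing from stretching the image.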
This is the source code that initialises the OpenGL renderer and passes the frames:
#include "stdafx.h"
#include <gl/gl.h>
#include "PreviewWindow.h"
PreviewWindow::PreviewWindow()
: m_deckLinkScreenPreviewHelper(NULL), m_refCount(1), m_previewBox(NULL), m_previewBoxDC(NULL), m_openGLctx(NULL)
{}
PreviewWindow::~PreviewWindow()
{
if (m_deckLinkScreenPreviewHelper != NULL)
{
m_deckLinkScreenPreviewHelper->Release();
m_deckLinkScreenPreviewHelper = NULL;
}
if (m_openGLctx != NULL)
{
wglDeleteContext(m_openGLctx);
m_openGLctx = NULL;
}
if (m_previewBoxDC != NULL)
{
m_previewBox->ReleaseDC(m_previewBoxDC);
m_previewBoxDC = NULL;
}
}
bool PreviewWindow::init(CStatic *previewBox)
{
m_previewBox = previewBox;
// Create the DeckLink screen preview helper
if (CoCreateInstance(CLSID_CDeckLinkGLScreenPreviewHelper, NULL, CLSCTX_ALL, IID_IDeckLinkGLScreenPreviewHelper, (void**)&m_deckLinkScreenPreviewHelper) != S_OK)
return false;
// Initialise OpenGL
return initOpenGL();
}
bool PreviewWindow::initOpenGL()
{
PIXELFORMATDESCRIPTOR pixelFormatDesc;
int pixelFormat;
bool result = false;
//
// Here, we create an OpenGL context attached to the screen preview box
// so we can use it later on when we need to draw preview frames.
// Get the preview box drawing context
m_previewBoxDC = m_previewBox->GetDC();
if (m_previewBoxDC == NULL)
return false;
// Ensure the preview box DC uses ARGB pixel format
ZeroMemory(&pixelFormatDesc, sizeof(pixelFormatDesc));
pixelFormatDesc.nSize = sizeof(pixelFormatDesc);
pixelFormatDesc.nVersion = 1;
pixelFormatDesc.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL;
pixelFormatDesc.iPixelType = PFD_TYPE_RGBA;
pixelFormatDesc.cColorBits = 32;
pixelFormatDesc.cDepthBits = 16;
pixelFormatDesc.cAlphaBits = 8;
pixelFormatDesc.iLayerType = PFD_MAIN_PLANE;
pixelFormat = ChoosePixelFormat(m_previewBoxDC->m_hDC, &pixelFormatDesc);
if (SetPixelFormat(m_previewBoxDC->m_hDC, pixelFormat, &pixelFormatDesc) == false)
return false;
// Create OpenGL rendering context
m_openGLctx = wglCreateContext(m_previewBoxDC->m_hDC);
if (m_openGLctx == NULL)
return false;
// Make the new OpenGL context the current rendering context so
// we can initialise the DeckLink preview helper
if (wglMakeCurrent(m_previewBoxDC->m_hDC, m_openGLctx) == FALSE)
return false;
if (m_deckLinkScreenPreviewHelper->InitializeGL() == S_OK)
result = true;
// Reset the OpenGL rendering context
wglMakeCurrent(NULL, NULL);
return result;
}
HRESULT PreviewWindow::DrawFrame(IDeckLinkVideoFrame* theFrame)
{
// Make sure we are initialised
if ((m_deckLinkScreenPreviewHelper == NULL) || (m_previewBoxDC == NULL) || (m_openGLctx == NULL))
return E_FAIL;
// First, pass the frame to the DeckLink screen preview helper
m_deckLinkScreenPreviewHelper->SetFrame(theFrame);
// Then set the OpenGL rendering context to the one we created before
wglMakeCurrent(m_previewBoxDC->m_hDC, m_openGLctx);
// and let the helper take care of the drawing
m_deckLinkScreenPreviewHelper->PaintGL();
// Last, reset the OpenGL rendering context
wglMakeCurrent(NULL, NULL);
return S_OK;
}
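For the second problem, one route that avoids linking against the FFmpeg libraries entirely is to read the rendered pixels back with glReadPixels() and pipe them into an ffmpeg.exe child process as raw video. A rough sketch, assuming a fixed preview size, 25 fps, and ffmpeg.exe on the PATH (the command line and output name are illustrative):
#include <windows.h>
#include <stdio.h>
#include <gl/gl.h>
// Starts one ffmpeg process for the whole recording session.
FILE* startRecorder(int w, int h)
{
    char cmd[512];
    // -f rawvideo: headerless frames on stdin; -pix_fmt rgba matches the
    // glReadPixels() call below; -vf vflip compensates for OpenGL's
    // bottom-up row order.
    snprintf(cmd, sizeof(cmd),
        "ffmpeg.exe -f rawvideo -pix_fmt rgba -s %dx%d -r 25 -i - "
        "-vf vflip -c:v libx264 -preset fast capture.mkv", w, h);
    return _popen(cmd, "wb"); // binary-mode pipe (Windows)
}
// Call once per drawn frame, while the OpenGL context is still current.
void recordFrame(FILE* pipe, int w, int h)
{
    static unsigned char buf[1920 * 1080 * 4]; // large enough for HD RGBA
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buf);
    fwrite(buf, 4, (size_t)w * h, pipe);
}
Close the pipe with _pclose() when capturing stops. Mixing in a microphone could in principle be done by adding a dshow audio input to the same command, though that is untested here.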
-
How AVCodecContext bitrate, framerate and timebase is used when encoding single frame
28 March 2023, by Cyrus
I am trying to learn FFmpeg from examples, as there is a tight schedule. The task is to encode a raw YUV image into JPEG format at a given width and height. I have found examples on the official FFmpeg website, which turn out to be quite straightforward. However, there are some fields in AVCodecContext that I thought only make sense when encoding video (e.g. bitrate, framerate, timebase, gop_size, max_b_frames, etc.).


I understand at a high level what those values mean for video, but do I need to care about them when I just want a single image? Currently, for testing, I am setting them to dummy values and it seems to work, but I want to make sure I am not making terrible assumptions that will break in the long run.


EDIT:


Here is the code I have. Most of it is copied from the examples, with some changes to replace old APIs with newer ones.


#include "thumbnail.h"
#include "libavcodec/avcodec.h"
#include "libavutil/imgutils.h"
#include <fcntl.h> // open(), O_CREAT; the angle-bracketed names were stripped from the original paste
#include <unistd.h> // write(), close()
#include <stdio.h> // printf()

void print_averror(int error_code) {
 char err_msg[100] = {0};
 av_strerror(error_code, err_msg, 100);
 printf("Reason: %s\n", err_msg);
}

ffmpeg_status_t save_yuv_as_jpeg(uint8_t* source_buffer, char* output_thumbnail_filename, int thumbnail_width, int thumbnail_height) {
 const AVCodec* mjpeg_codec = avcodec_find_encoder(AV_CODEC_ID_MJPEG);
 if (!mjpeg_codec) {
 printf("Codec for mjpeg cannot be found.\n");
 return FFMPEG_THUMBNAIL_CODEC_NOT_FOUND;
 }

 AVCodecContext* codec_ctx = avcodec_alloc_context3(mjpeg_codec);
 if (!codec_ctx) {
 printf("Codec context cannot be allocated for the given mjpeg codec.\n");
 return FFMPEG_THUMBNAIL_ALLOC_CONTEXT_FAILED;
 }

 AVPacket* pkt = av_packet_alloc();
 if (!pkt) {
 printf("Thumbnail packet cannot be allocated.\n");
 return FFMPEG_THUMBNAIL_ALLOC_PACKET_FAILED;
 }

 AVFrame* frame = av_frame_alloc();
 if (!frame) {
 printf("Thumbnail frame cannot be allocated.\n");
 return FFMPEG_THUMBNAIL_ALLOC_FRAME_FAILED;
 }

 // The part that I don't understand
 codec_ctx->bit_rate = 400000;
 codec_ctx->width = thumbnail_width;
 codec_ctx->height = thumbnail_height;
 codec_ctx->time_base = (AVRational){1, 25};
 codec_ctx->framerate = (AVRational){25, 1}; // for encoding, this is the inverse of time_base

 codec_ctx->gop_size = 10;
 codec_ctx->max_b_frames = 1;
 codec_ctx->pix_fmt = AV_PIX_FMT_YUV420P;

 // The context must be opened before any frame can be sent to it.
 int ret = avcodec_open2(codec_ctx, mjpeg_codec, NULL);
 if (ret < 0) {
 print_averror(ret);
 printf("Failed to open the mjpeg codec.\n");
 return FFMPEG_THUMBNAIL_ALLOC_CONTEXT_FAILED; // reusing the closest existing error code
 }

 // avcodec_send_frame() validates these fields on the frame itself.
 frame->format = AV_PIX_FMT_YUV420P;
 frame->width = thumbnail_width;
 frame->height = thumbnail_height;
 ret = av_image_fill_arrays(frame->data, frame->linesize, source_buffer, AV_PIX_FMT_YUV420P, thumbnail_width, thumbnail_height, 32);
 if (ret < 0) {
 print_averror(ret);
 printf("Pixel format: yuv420p, width: %d, height: %d\n", thumbnail_width, thumbnail_height);
 return FFMPEG_THUMBNAIL_FILL_FRAME_DATA_FAILED;
 }

 ret = avcodec_send_frame(codec_ctx, frame);
 if (ret < 0) {
 print_averror(ret);
 printf("Failed to send frame to encoder.\n");
 return FFMPEG_THUMBNAIL_FILL_SEND_FRAME_FAILED;
 }

 ret = avcodec_receive_packet(codec_ctx, pkt);
 if (ret < 0) {
 print_averror(ret);
 printf("Failed to receive packet from encoder.\n");
 return FFMPEG_THUMBNAIL_FILL_SEND_FRAME_FAILED;
 }

 // store the thumbnail in output
 int fd = open(output_thumbnail_filename, O_CREAT | O_RDWR, 0644); // O_CREAT requires a third (mode) argument
 write(fd, pkt->data, pkt->size);
 close(fd);

 // freeing allocated structs
 avcodec_free_context(&codec_ctx);
 av_frame_free(&frame);
 av_packet_free(&pkt);
 return FFMPEG_SUCCESS;
}
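On the question itself: encoders require time_base to be set before the codec is opened, but for an intra-only codec like MJPEG, gop_size and max_b_frames have no effect, bit_rate merely steers its rate control, and framerate can normally stay unset for a single image. Depending on the FFmpeg version, the MJPEG encoder may also insist on AV_PIX_FMT_YUVJ420P (or strict_std_compliance lowered to "unofficial") rather than plain yuv420p. Separately, a single avcodec_receive_packet() call is allowed to return AVERROR(EAGAIN) before the encoder has produced output, so the documented pattern flushes with a NULL frame and loops. A sketch reusing the names above, which would replace the single receive call and the write:
 // Signal end-of-stream so the encoder emits everything it is holding back,
 // then drain it packet by packet.
 ret = avcodec_send_frame(codec_ctx, NULL);
 while ((ret = avcodec_receive_packet(codec_ctx, pkt)) == 0) {
 write(fd, pkt->data, pkt->size);
 av_packet_unref(pkt);
 }
 if (ret != AVERROR_EOF) // AVERROR_EOF means fully drained; anything else is an error
 print_averror(ret);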