
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (82)
-
Encoding and processing into web-friendly formats
13 April 2011, by
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in MP4, Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed in order to retrieve the data needed for search-engine indexing, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...) -
MediaSPIP Player: potential problems
22 February 2011, by
The player does not work on Internet Explorer
On Internet Explorer (at least versions 7 and 8), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the problem may come from the configuration of Apache's mod_deflate module.
If the configuration of that Apache module contains a line resembling the following, try removing or commenting it out to see whether the player then works correctly: /** * GeSHi (C) 2004 - 2007 Nigel McNie, (...) -
Supporting all media types
13 April 2011, by
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (5882)
-
Reading live output from FFMPEG using PHP
29 September 2018, by user561787
The problem I'm dealing with is getting the shell output of an ffmpeg command while it is being executed, and writing it into an HTML page using PHP.
After some research I found a very similar request here: Update page content from live PHP and Python output using Ajax, which seemed to be perfect, but it's not working at all.
The basic idea is to use an AJAX request to invoke the PHP script, which should execute the command and echo the content read from the process as it runs, taking care to check this.readyState == 3 (otherwise the JS script would receive the response only upon completion).
For the PHP section I tried using the code in the answer above (obviously adapted to my needs):
function liveExecuteCommand($command){
    while (@ob_end_flush()); // end all output buffers if any
    $proc = popen($command, 'r');
    $live_output = "";
    $complete_output = "";
    while (!feof($proc))
    {
        $live_output = fread($proc, 4096);
        $complete_output = $complete_output . $live_output;
        echo "<pre>$live_output</pre>";
        @flush();
    }
    pclose($proc);
}

And for the AJAX section I used:
function getLiveStream(){
    var ajax = new XMLHttpRequest();
    ajax.onreadystatechange = function() {
        if (this.readyState == 3) {
            document.getElementById("result").innerHTML = this.responseText;
        }
    };
    var url = 'process/getlive';
    ajax.open('GET', url, true);
    ajax.send();
}

Which sadly doesn't work.
The command being executed is this:
'ffmpeg.exe -i "C:/Users/BACKUP/Desktop/xampp/htdocs/testarea/test.mp4" -map 0:0 -map 0:1 -c:v libx264 -preset fast -crf 26 -c:a libmp3lame -ar 24000 -q:a 5 "C:\Users\BACKUP\Desktop\xampp\htdocs\testarea\output/test.mkv"'
which I tested, and it works. When I run the HTML page and the AJAX script within it, the ffmpeg command doesn't even run, as I checked in Task Manager. It simply returns blank text.
When I run the PHP script by itself, the command runs and the file is converted, but it doesn't echo anything at all.
After some more research I also found this page, which seems to be made for this exact purpose: https://github.com/4poc/php-live-transcode/blob/master/stream.php
The relevant section is at the end; the code before it deals with options specific to ffmpeg. But it didn't work either, with the same exact outcomes.
Now I'm considering simply writing the output to a file and reading from it dynamically, but I'd really like to know why neither approach works for me.
EDIT: PHP Execute shell command asynchronously and retrieve live output answers how to get content from a temporary file that is being written, not directly from the process.
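For what it's worth, one likely culprit is that ffmpeg writes its progress report to stderr, not stdout, while popen($command, 'r') only reads stdout, so there is nothing to read until the process exits. In PHP the usual workaround is appending 2>&1 to the command or using proc_open() with a stderr pipe. A minimal sketch of the same idea in Python (the stream_command name is mine, not from either answer):

```python
import subprocess

def stream_command(cmd):
    """Yield the command's combined output line by line as it is produced.

    stderr is merged into stdout because ffmpeg logs its progress to
    stderr; reading only stdout would show nothing until completion.
    """
    proc = subprocess.Popen(
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # merge stderr (where ffmpeg logs) into stdout
        text=True,
        bufsize=1,                 # line-buffered
    )
    for line in proc.stdout:
        yield line.rstrip("\n")
    proc.wait()

# Hypothetical usage (paths are placeholders):
# for line in stream_command(["ffmpeg", "-i", "test.mp4", "out.mkv"]):
#     print(line)
```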
-
Grand Unified Theory of Compact Disc
1 February 2013, by Multimedia Mike — General
This is something I started writing about a decade ago (and I almost certainly have some of it wrong), back when compact discs still had a fair amount of relevance. Back around 2002, after a few years investigating multimedia technology, I took an interest in compact discs of all sorts. Even though there may seem to be a wide range of CD types, I generally found that they're all fundamentally the same. I thought I would finally publish something, incomplete though it may be.
Physical Perspective
There are a lot of ways to look at a compact disc. First, there's the physical format, where a laser detects where pits/grooves have disturbed the smooth surface (a.k.a. lands). A lot of technical descriptions claim that these lands and pits on a CD correspond to ones and zeros. That's not actually true, but you have to decide what level of abstraction you care about, and that abstraction is good enough if you only care about the discs from a software perspective.
Grand Unified Theory (Software Perspective)
Looking at a disc from a software perspective, I have generally found it useful to view a CD as a combination of 2 main components:
- table of contents (TOC)
- a long string of sectors, each of which is 2352 bytes long
I like to believe that's pretty much all there is to it. All of the information on a CD is stored as a string of sectors that might be chopped up into a series of anywhere from 1 to 99 individual tracks. The exact sector locations where these individual tracks begin are defined in the TOC.
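As a toy illustration of this two-component model (the class and field names here are my own invention, not any real CD API):

```python
from dataclasses import dataclass

@dataclass
class Disc:
    """A CD as a TOC plus one long string of 2352-byte sectors."""
    toc: list[int]      # start sector of each track (1 to 99 entries)
    sectors: bytes      # the raw sector string

    SECTOR_SIZE = 2352

    def track(self, n):
        """Return track n (1-based) as its slice of the sector string."""
        start = self.toc[n - 1] * self.SECTOR_SIZE
        end = (self.toc[n] * self.SECTOR_SIZE
               if n < len(self.toc) else len(self.sectors))
        return self.sectors[start:end]
```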
Audio CDs (CD-DA / Red Book)
The initial purpose for the compact disc was to store digital audio. The strange sector size of 2352 bytes is an artifact of this original charter. “CD quality audio”, as any multimedia nerd knows, is formally defined as stereo PCM samples that are each 16 bits wide and played at a frequency of 44100 Hz.
(44100 audio frames / 1 second) * (2 samples / audio frame) * (16 bits / 1 sample) * (1 byte / 8 bits) = 176,400 bytes / second
(176,400 bytes / 1 second) / (2352 bytes / 1 sector) = 75 sectors / second
75 is the number of sectors required to store a single second of CD-quality audio. A single sector stores 1/75th of a second, or a ‘frame’ of audio (though I think ‘frame’ gets tossed around at all levels when describing CD formats).
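The arithmetic above is easy to sanity-check in a few lines:

```python
# CD-DA math: stereo 16-bit PCM at 44100 Hz, 2352-byte sectors
bytes_per_second = 44100 * 2 * 16 // 8   # frames/s * channels * bits / 8
assert bytes_per_second == 176_400
sectors_per_second = bytes_per_second // 2352
assert sectors_per_second == 75          # one sector = 1/75th of a second
```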
The term “red book” is thrown around in relation to audio CDs. There is a series of rainbow books that define various optical disc standards and the red book describes audio CDs.
Basic Data CD-ROMs (Mode 1 / Yellow Book)
Somewhere along the line, someone decided that general digital information could be stored on these discs. Hence, the CD-ROM was born. The standard model above still applies: TOC and a string of 2352-byte sectors. However, it's generally only useful to have a single track on a CD-ROM. Thus, the TOC only lists a single track. That single track can easily span the entire disc (something that would be unusual for a typical audio CD).
While the model is mostly the same, the most notable difference between an audio CD and a plain CD-ROM is that, while each sector is 2352 bytes long, only 2048 bytes are used to store the actual data payload. The remaining bytes are used for synchronization and additional error detection/correction.
At least, the foregoing is true for mode 1 / form 1 CD-ROMs (which are the most common). “Mode 1” CD-ROMs are defined by a publication called the yellow book. There is also mode 1 / form 2. This forgoes the additional error detection and correction afforded by form 1 and dedicates 2336 of the 2352 sector bytes to the data payload.
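Those payload sizes are where the familiar capacity figures come from. A back-of-the-envelope check, using the 75 sectors/second figure from the audio section, recovers the well-known ~650 MB capacity of a 74-minute disc used as a mode 1 / form 1 CD-ROM:

```python
# 74 minutes of sectors, at 75 sectors per second
sectors_74min = 74 * 60 * 75        # 333,000 sectors
mode1_form1_payload = 2048          # usable data bytes per 2352-byte sector
capacity = sectors_74min * mode1_form1_payload
print(capacity // (1024 * 1024))    # 650 (MiB)
```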
CD-ROM XA (Mode 2 / Green Book)
From a software perspective, these are similar to mode 1 CD-ROMs. There are also 2 forms here. The first form gives a 2048-byte data payload while the second form yields a 2324-byte data payload.
Video CD (VCD / White Book)
These are CD-ROM XA discs that carry MPEG-1 video and audio data.
Photo CD (Beige Book)
This is something I have never personally dealt with. But it's supposed to conform to the CD-ROM XA standard and probably fits into my model. It seems to date back to early in the CD-ROM era, when CDs were particularly cost prohibitive.
Multisession CDs (Blue Book)
Okay, I admit that this confuses me a bit. Multisession discs allow a user to burn multiple sessions to a single recordable disc. I.e., burn a lump of data, then burn another lump at a later time, and the final result will look like all the lumps were recorded as the same big lump. I remember this being incredibly useful and cost effective back when recordable CDs cost around US$10 each (vs. being able to buy a spindle of 100 CD-Rs for US$10 or less now). Studying the cdrom.h file for the Linux OS, I found a system call named CDROMMULTISESSION that returns the sector address of the start of the last session. If I were to hypothesize about how to make this fit into my model, I might guess that the TOC has some hint that the disc was recorded in multisession (which needs to be decided up front) and the CDROMMULTISESSION call is made to find the last session. Or it could be that a disc read initialization operation always leads off with the CDROMMULTISESSION query in order to determine this.
I suppose I could figure out how to create a multisession disc with modern software, or possibly dig up a multisession disc from 15+ years ago, and then figure out how it should be read.
CD-i
This type puzzles me as well. I do have some CD-i discs and I thought that I could read them just fine (the last time I looked, which was many years ago). But my research for this blog post has me thinking that I might not have been seeing the entire picture when I first studied my CD-i samples. I was able to see some of the data, but sources indicate that only proper CD-i hardware is able to see all of the data on the disc (apparently, the TOC doesn't show all of the sectors on disc).
Hybrid CDs (Data + Audio)
At some point, it became a notable selling point for an audio CD to have a data track with bonus features. Even more common (particularly in the early era of CD-ROMs) were computer and console games that used the first track of a disc for all the game code and assets and the remaining tracks for beautifully rendered game audio that could also be enjoyed outside the game. Same model: TOC points to the various tracks and also makes notes about which ones are data and which are audio.
There seem to be 2 distinct things described above. One type is the mixed mode CD, which generally has the data in the first track and the audio in tracks 2..n. Then there is the enhanced CD, which apparently used multisession recording and put the data at the end. I think the reasoning for this is that most audio CD player hardware would only read tracks from the first session and would have no way to see the data track. This was a positive thing. By contrast, when placing a mixed-mode CD into an audio player, the data track would be rendered as nonsense noise.
Subchannels
There's at least one small detail that my model ignores: subchannels. CDs can encode bits of data in subchannels within sectors. This is used for things like CD-Text and CD-G. I may need to revisit this.
In Summary
There’s still a lot of ground to cover, like how those sectors might be formatted to show something useful (e.g., filesystems), and how the model applies to other types of optical discs. Sounds like something for another post. -
Blackmagic SDK - OpenGL Letterbox and FFMPEG
18 March 2015, by Marco Reisacher
So I abandoned my idea of using the DirectShow filters due to a lack of support for the video formats I need.
The native API uses OpenGL, which I am a total beginner to.
I stumbled upon the following problems:
-
How to automatically apply a letterbox or pillarbox depending on the width and height of the frame that gets passed to OpenGL (I'm using bpctrlanchormap.h to auto-size everything and I get squeezed/stretched images)
-
How to record a video of the OpenGL stream (I looked around and saw that ffmpeg should be able to do so, but I can't get it running). It would also be nice to record audio from a microphone into the same file.
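On the first problem: the usual approach is to compute the largest aspect-preserving viewport that fits inside the window, hand that rectangle to glViewport(), and let the leftover area form the bars. A sketch of just the arithmetic, in Python for brevity (the function name is mine; this is not part of the Blackmagic SDK):

```python
def letterbox(win_w, win_h, frame_w, frame_h):
    """Return (x, y, w, h) of the largest aspect-preserving viewport.

    Aspect ratios are compared by cross-multiplication to stay in
    integers: the frame is 'wider' than the window when
    frame_w * win_h > win_w * frame_h.
    """
    if frame_w * win_h > win_w * frame_h:
        # frame wider than window: full width, bars top/bottom (letterbox)
        w = win_w
        h = win_w * frame_h // frame_w
    else:
        # frame taller/narrower: full height, bars left/right (pillarbox)
        h = win_h
        w = win_h * frame_w // frame_h
    return ((win_w - w) // 2, (win_h - h) // 2, w, h)

# e.g. a 16:9 frame in a square 800x800 window gets letterboxed:
# letterbox(800, 800, 1920, 1080) -> (0, 175, 800, 450)
```

In the C++ sample, the resulting rectangle would be applied with glViewport() before calling PaintGL(), instead of letting bpctrlanchormap.h stretch the preview box.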
I’m using the Blackmagic "Capture Preview" sample.
This is the source code that initialises the OpenGL renderer and passes the frames:
#include "stdafx.h"
#include <gl/gl.h>
#include "PreviewWindow.h"

PreviewWindow::PreviewWindow()
    : m_deckLinkScreenPreviewHelper(NULL), m_refCount(1), m_previewBox(NULL), m_previewBoxDC(NULL), m_openGLctx(NULL)
{}

PreviewWindow::~PreviewWindow()
{
    if (m_deckLinkScreenPreviewHelper != NULL)
    {
        m_deckLinkScreenPreviewHelper->Release();
        m_deckLinkScreenPreviewHelper = NULL;
    }
    if (m_openGLctx != NULL)
    {
        wglDeleteContext(m_openGLctx);
        m_openGLctx = NULL;
    }
    if (m_previewBoxDC != NULL)
    {
        m_previewBox->ReleaseDC(m_previewBoxDC);
        m_previewBoxDC = NULL;
    }
}

bool PreviewWindow::init(CStatic *previewBox)
{
    m_previewBox = previewBox;

    // Create the DeckLink screen preview helper
    if (CoCreateInstance(CLSID_CDeckLinkGLScreenPreviewHelper, NULL, CLSCTX_ALL, IID_IDeckLinkGLScreenPreviewHelper, (void**)&m_deckLinkScreenPreviewHelper) != S_OK)
        return false;

    // Initialise OpenGL
    return initOpenGL();
}

bool PreviewWindow::initOpenGL()
{
    PIXELFORMATDESCRIPTOR pixelFormatDesc;
    int pixelFormat;
    bool result = false;

    //
    // Here, we create an OpenGL context attached to the screen preview box
    // so we can use it later on when we need to draw preview frames.

    // Get the preview box drawing context
    m_previewBoxDC = m_previewBox->GetDC();
    if (m_previewBoxDC == NULL)
        return false;

    // Ensure the preview box DC uses ARGB pixel format
    ZeroMemory(&pixelFormatDesc, sizeof(pixelFormatDesc));
    pixelFormatDesc.nSize = sizeof(pixelFormatDesc);
    pixelFormatDesc.nVersion = 1;
    pixelFormatDesc.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL;
    pixelFormatDesc.iPixelType = PFD_TYPE_RGBA;
    pixelFormatDesc.cColorBits = 32;
    pixelFormatDesc.cDepthBits = 16;
    pixelFormatDesc.cAlphaBits = 8;
    pixelFormatDesc.iLayerType = PFD_MAIN_PLANE;
    pixelFormat = ChoosePixelFormat(m_previewBoxDC->m_hDC, &pixelFormatDesc);
    if (SetPixelFormat(m_previewBoxDC->m_hDC, pixelFormat, &pixelFormatDesc) == FALSE)
        return false;

    // Create OpenGL rendering context
    m_openGLctx = wglCreateContext(m_previewBoxDC->m_hDC);
    if (m_openGLctx == NULL)
        return false;

    // Make the new OpenGL context the current rendering context so
    // we can initialise the DeckLink preview helper
    if (wglMakeCurrent(m_previewBoxDC->m_hDC, m_openGLctx) == FALSE)
        return false;

    if (m_deckLinkScreenPreviewHelper->InitializeGL() == S_OK)
        result = true;

    // Reset the OpenGL rendering context
    wglMakeCurrent(NULL, NULL);
    return result;
}

HRESULT PreviewWindow::DrawFrame(IDeckLinkVideoFrame* theFrame)
{
    // Make sure we are initialised
    if ((m_deckLinkScreenPreviewHelper == NULL) || (m_previewBoxDC == NULL) || (m_openGLctx == NULL))
        return E_FAIL;

    // First, pass the frame to the DeckLink screen preview helper
    m_deckLinkScreenPreviewHelper->SetFrame(theFrame);

    // Then set the OpenGL rendering context to the one we created before
    wglMakeCurrent(m_previewBoxDC->m_hDC, m_openGLctx);

    // and let the helper take care of the drawing
    m_deckLinkScreenPreviewHelper->PaintGL();

    // Last, reset the OpenGL rendering context
    wglMakeCurrent(NULL, NULL);
    return S_OK;
}
-