
Media (2)
-
Granite de l’Aber Ildut
9 September 2011
Updated: September 2011
Language: French
Type: Text
-
Géodiversité
9 September 2011
Updated: August 2018
Language: French
Type: Text
Other articles (80)
-
MediaSPIP version 0.1 Beta
16 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for a "farm mode" installation, you will also need to make other modifications (...) -
MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to make other manual (...) -
Improving the base version
13 September 2013. Nicer multiple selection
The Chosen plugin improves the ergonomics of multiple-selection fields. See the following two images for a comparison.
To use it, simply activate the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)
On other sites (14065)
-
How to send large x264 NAL over RTMP?
17 September 2017, by samgak. I’m trying to stream video over RTMP using x264 and rtmplib in C++ on Windows.
So far I have managed to encode and stream a test video pattern consisting of animated multi-colored vertical lines that I generate in code. It’s possible to start and stop the stream, and start and stop the player, and it works every time. However, as soon as I modify it to send encoded camera frames instead of the test pattern, the streaming becomes very unreliable. It only starts <20% of the time, and stopping and restarting doesn’t work.
After searching around for answers I concluded that it must be because the NAL size is too large (my test pattern is mostly flat color so it encodes to a very small size), and there is an Ethernet packet limit of around 1400 bytes that affects it. So, I tried to make x264 only output NALs under 1200 bytes, by setting
i_slice_max_size
in my x264 setup:

if (x264_param_default_preset(&param, "veryfast", "zerolatency") < 0)
return false;
param.i_csp = X264_CSP_I420;
param.i_threads = 1;
param.i_width = width; //set frame width
param.i_height = height; //set frame height
param.b_cabac = 0;
param.i_bframe = 0;
param.b_interlaced = 0;
param.rc.i_rc_method = X264_RC_ABR;
param.i_level_idc = 21;
param.rc.i_bitrate = 128;
param.b_intra_refresh = 1;
param.b_annexb = 1;
param.i_keyint_max = 25;
param.i_fps_num = 15;
param.i_fps_den = 1;
param.i_slice_max_size = 1200;
if (x264_param_apply_profile(&param, "baseline") < 0)
return false;

This reduces the NAL size, but it doesn’t seem to make any difference to the reliability issues.
I’ve also tried fragmenting the NALs, using this Java code and RFC 3984 (RTP Payload Format for H.264 Video) as a reference, but it doesn’t work at all (code below); the server says "stream has stopped" immediately after it starts. I’ve tried including and excluding the NAL header (with the timestamp etc.) in each fragment or just in the first, but it doesn’t work for me either way.
I’m pretty sure my issue has to be with the NAL size and not PPS/SPS or anything like that (as in this question) or with my network connection or test server, because everything works fine with the test pattern.
I’m sending NAL_PPS and NAL_SPS (only once), and all NAL_SLICE_IDR and NAL_SLICE packets. I’m ignoring NAL_SEI and not sending it.
One thing that is confusing me is that the source code that I can find on the internet that does similar things to what I want doesn’t match up with what the RFC specifies. For example, RFC 3984 section 5.3 defines the NAL octet, which should have the NAL type in the lower 5 bits and the NRI in bits 5 and 6 (bit 7 is zero). The types NAL_SLICE_IDR and NAL_SLICE have values of 5 and 1 respectively, which are the ones in table 7-1 of this document (PDF) referenced by the RFC and also the ones output by x264. But the code that actually works sets the NAL octet to 39 (0x27) and 23 (0x17), for reasons unknown to me. When implementing fragmented NALs, I’ve tried both following the spec and using the values copied over from the working code, but neither works.
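For reference, the two-byte FU-A prefix described in RFC 3984 section 5.8 can be sketched in C as follows. This is a minimal sketch of the spec’s byte layout only; `fua_prefix` is a hypothetical helper name, not something from the code below.

```c
#include <stdint.h>

/* Sketch of the FU-A byte layout from RFC 3984 section 5.8.
   fua_prefix is a hypothetical helper, not part of the question's code. */
static void fua_prefix(uint8_t nal_octet, int start, int end,
                       uint8_t *indicator, uint8_t *header)
{
    /* FU indicator: F bit and NRI copied from the original NAL octet,
       with the type field set to 28 (FU-A). */
    *indicator = (uint8_t)((nal_octet & 0xE0) | 28);

    /* FU header: S bit on the first fragment, E bit on the last,
       and the original NAL unit type in the low 5 bits. */
    *header = (uint8_t)((start ? 0x80 : 0) | (end ? 0x40 : 0) |
                        (nal_octet & 0x1F));
}
```

For an IDR slice whose NAL octet is 0x65 (NRI 3, type 5), every fragment’s indicator would be 0x7C, the first fragment’s header 0x85, and the last fragment’s header 0x45.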
Any help appreciated.
void sendNAL(unsigned char* buf, int len)
{
Logging::LogNumber("sendNAL", len);
RTMPPacket * packet;
long timeoffset = GetTickCount() - startTime;
if (buf[2] == 0x00) { //00 00 00 01
buf += 4;
len -= 4;
}
else if (buf[2] == 0x01) { //00 00 01
buf += 3;
len -= 3;
}
else
{
Logging::LogStdString("INVALID x264 FRAME!");
}
int type = buf[0] & 0x1f;
int maxNALSize = 1200;
if (len <= maxNALSize)
{
packet = (RTMPPacket *)malloc(RTMP_HEAD_SIZE + len + 9);
memset(packet, 0, RTMP_HEAD_SIZE);
packet->m_body = (char *)packet + RTMP_HEAD_SIZE;
packet->m_nBodySize = len + 9;
unsigned char *body = (unsigned char *)packet->m_body;
memset(body, 0, len + 9);
body[0] = 0x27;
if (type == NAL_SLICE_IDR) {
body[0] = 0x17;
}
body[1] = 0x01; //nal unit
body[2] = 0x00;
body[3] = 0x00;
body[4] = 0x00;
body[5] = (len >> 24) & 0xff;
body[6] = (len >> 16) & 0xff;
body[7] = (len >> 8) & 0xff;
body[8] = (len) & 0xff;
memcpy(&body[9], buf, len);
packet->m_hasAbsTimestamp = 0;
packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
if (rtmp != NULL) {
packet->m_nInfoField2 = rtmp->m_stream_id;
}
packet->m_nChannel = 0x04;
packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
packet->m_nTimeStamp = timeoffset;
if (rtmp != NULL) {
RTMP_SendPacket(rtmp, packet, QUEUE_RTMP);
}
free(packet);
}
else
{
packet = (RTMPPacket *)malloc(RTMP_HEAD_SIZE + maxNALSize + 90);
memset(packet, 0, RTMP_HEAD_SIZE);
// split large NAL into multiple smaller ones:
int sentBytes = 0;
bool firstFragment = true;
while (sentBytes < len)
{
// decide how many bytes to send in this fragment:
int fragmentSize = maxNALSize;
if (sentBytes + fragmentSize > len)
fragmentSize = len - sentBytes;
bool lastFragment = (sentBytes + fragmentSize) >= len;
packet->m_body = (char *)packet + RTMP_HEAD_SIZE;
int headerBytes = firstFragment ? 10 : 2;
packet->m_nBodySize = fragmentSize + headerBytes;
unsigned char *body = (unsigned char *)packet->m_body;
memset(body, 0, fragmentSize + headerBytes);
//key frame
int NALtype = 0x27;
if (type == NAL_SLICE_IDR) {
NALtype = 0x17;
}
// Set FU-A indicator
body[0] = (byte)((NALtype & 0x60) & 0xFF); // FU indicator NRI
body[0] += 28; // 28 = FU - A (fragmentation unit A) see RFC: https://tools.ietf.org/html/rfc3984
// Set FU-A header
body[1] = (byte)(NALtype & 0x1F); // FU header type
body[1] += (firstFragment ? 0x80 : 0) + (lastFragment ? 0x40 : 0); // Start/End bits
body[2] = 0x01; //nal unit
body[3] = 0x00;
body[4] = 0x00;
body[5] = 0x00;
body[6] = (len >> 24) & 0xff;
body[7] = (len >> 16) & 0xff;
body[8] = (len >> 8) & 0xff;
body[9] = (len) & 0xff;
//copy data
memcpy(&body[headerBytes], buf + sentBytes, fragmentSize);
packet->m_hasAbsTimestamp = 0;
packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
if (rtmp != NULL) {
packet->m_nInfoField2 = rtmp->m_stream_id;
}
packet->m_nChannel = 0x04;
packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
packet->m_nTimeStamp = timeoffset;
if (rtmp != NULL) {
RTMP_SendPacket(rtmp, packet, TRUE);
}
sentBytes += fragmentSize;
firstFragment = false;
}
free(packet);
}
}
-
Recording voice using HTML5 and processing it with ffmpeg
22 March 2015, by user3789242. I need to use ffmpeg in my JavaScript/HTML5 project, which allows the user to select the format he wants the audio to open with. I don’t know anything about ffmpeg, and despite lots of research I don’t know how to use it in my project. I found an example, https://github.com/sopel39/audioconverter.js, but the problem is how can I install the 8 MB ffmpeg.js into my project. Please, if someone can help me, I’ll be very thankful.
Here is my full code. The JavaScript page:
// variables
var leftchannel = [];
var rightchannel = [];
var recorder = null;
var recording = false;
var recordingLength = 0;
var volume = null;
var audioInput = null;
var sampleRate = 44100;
var audioContext = null;
var context = null;
var outputString;
if (!navigator.getUserMedia)
navigator.getUserMedia = navigator.getUserMedia ||
navigator.webkitGetUserMedia ||
navigator.mozGetUserMedia ||
navigator.msGetUserMedia;
if (navigator.getUserMedia){
navigator.getUserMedia({audio:true}, success, function(e) {
alert('Error capturing audio.');
});
} else alert('getUserMedia not supported in this browser.');
function getVal(value)
{
// if R is pressed, we start recording
if ( value == "record"){
recording = true;
// reset the buffers for the new recording
leftchannel.length = rightchannel.length = 0;
recordingLength = 0;
document.getElementById('output').innerHTML="Recording now...";
// if S is pressed, we stop the recording and package the WAV file
} else if ( value == "stop" ){
// we stop recording
recording = false;
document.getElementById('output').innerHTML="Building wav file...";
// we flat the left and right channels down
var leftBuffer = mergeBuffers ( leftchannel, recordingLength );
var rightBuffer = mergeBuffers ( rightchannel, recordingLength );
// we interleave both channels together
var interleaved = interleave ( leftBuffer, rightBuffer );
var buffer = new ArrayBuffer(44 + interleaved.length * 2);
var view = new DataView(buffer);
// RIFF chunk descriptor
writeUTFBytes(view, 0, 'RIFF');
view.setUint32(4, 44 + interleaved.length * 2, true);
writeUTFBytes(view, 8, 'WAVE');
// FMT sub-chunk
writeUTFBytes(view, 12, 'fmt ');
view.setUint32(16, 16, true);
view.setUint16(20, 1, true);
// stereo (2 channels)
view.setUint16(22, 2, true);
view.setUint32(24, sampleRate, true);
view.setUint32(28, sampleRate * 4, true);
view.setUint16(32, 4, true);
view.setUint16(34, 16, true);
// data sub-chunk
writeUTFBytes(view, 36, 'data');
view.setUint32(40, interleaved.length * 2, true);
var lng = interleaved.length;
var index = 44;
var volume = 1;
for (var i = 0; i < lng; i++){
view.setInt16(index, interleaved[i] * (0x7FFF * volume), true);
index += 2;
}
var blob = new Blob ( [ view ], { type : 'audio/wav' } );
// let's save it locally
document.getElementById('output').innerHTML='Handing off the file now...';
var url = (window.URL || window.webkitURL).createObjectURL(blob);
var li = document.createElement('li');
var au = document.createElement('audio');
var hf = document.createElement('a');
au.controls = true;
au.src = url;
hf.href = url;
hf.download = 'audio_recording_' + new Date().getTime() + '.wav';
hf.innerHTML = hf.download;
li.appendChild(au);
li.appendChild(hf);
recordingList.appendChild(li);
}
}
function success(e){
audioContext = window.AudioContext || window.webkitAudioContext;
context = new audioContext();
volume = context.createGain();
// creates an audio node from the microphone incoming stream(source)
source = context.createMediaStreamSource(e);
// connect the stream(source) to the gain node
source.connect(volume);
var bufferSize = 2048;
recorder = context.createScriptProcessor(bufferSize, 2, 2);
//node for the visualizer
analyser = context.createAnalyser();
analyser.smoothingTimeConstant = 0.3;
analyser.fftSize = 512;
splitter = context.createChannelSplitter();
//when recording happens
recorder.onaudioprocess = function(e){
if (!recording) return;
var left = e.inputBuffer.getChannelData (0);
var right = e.inputBuffer.getChannelData (1);
leftchannel.push (new Float32Array (left));
rightchannel.push (new Float32Array (right));
recordingLength += bufferSize;
// get the average for the first channel
var array = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(array);
var c=document.getElementById("myCanvas");
var ctx = c.getContext("2d");
// clear the current state
ctx.clearRect(0, 0, 1000, 325);
var gradient = ctx.createLinearGradient(0,0,0,300);
gradient.addColorStop(1,'#000000');
gradient.addColorStop(0.75,'#ff0000');
gradient.addColorStop(0.25,'#ffff00');
gradient.addColorStop(0,'#ffffff');
// set the fill style
ctx.fillStyle=gradient;
drawSpectrum(array);
function drawSpectrum(array) {
for ( var i = 0; i < (array.length); i++ ){
var value = array[i];
ctx.fillRect(i*5,325-value,3,325);
}
}
}
function getAverageVolume(array) {
var values = 0;
var average;
var length = array.length;
// get all the frequency amplitudes
for (var i = 0; i < length; i++) {
values += array[i];
}
average = values / length;
return average;
}
// we connect the recorder(node to destination(speakers))
volume.connect(splitter);
splitter.connect(analyser, 0, 0);
analyser.connect(recorder);
recorder.connect(context.destination);
}
function mergeBuffers(channelBuffer, recordingLength){
var result = new Float32Array(recordingLength);
var offset = 0;
var lng = channelBuffer.length;
for (var i = 0; i < lng; i++){
var buffer = channelBuffer[i];
result.set(buffer, offset);
offset += buffer.length;
}
return result;
}
function interleave(leftChannel, rightChannel){
var length = leftChannel.length + rightChannel.length;
var result = new Float32Array(length);
var inputIndex = 0;
for (var index = 0; index < length; ){
result[index++] = leftChannel[inputIndex];
result[index++] = rightChannel[inputIndex];
inputIndex++;
}
return result;
}
function writeUTFBytes(view, offset, string){
var lng = string.length;
for (var i = 0; i < lng; i++){
view.setUint8(offset + i, string.charCodeAt(i));
}
}

And here is the HTML code:
<code class="echappe-js"><script src="http://stackoverflow.com/feeds/tag/js/functions.js"></script>
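For a cross-check of the byte layout, the 44-byte RIFF/WAVE header that the recording script assembles with DataView can be sketched in C for 16-bit stereo PCM. The helper names are mine; the constants mirror the script’s (PCM format 1, 2 channels, 16 bits per sample), and this is a sketch rather than an authoritative encoder.

```c
#include <stdint.h>
#include <string.h>

/* Little-endian field writers, matching DataView's `true` flag. */
static void put_u32le(uint8_t *p, uint32_t v)
{
    p[0] = v & 0xff; p[1] = (v >> 8) & 0xff;
    p[2] = (v >> 16) & 0xff; p[3] = (v >> 24) & 0xff;
}

static void put_u16le(uint8_t *p, uint16_t v)
{
    p[0] = v & 0xff; p[1] = (v >> 8) & 0xff;
}

/* Write the canonical 44-byte WAV header for 16-bit stereo PCM,
   mirroring the DataView writes in the recording script above. */
static void wav_header(uint8_t h[44], uint32_t sample_rate, uint32_t data_bytes)
{
    memcpy(h + 0, "RIFF", 4);
    put_u32le(h + 4, 36 + data_bytes);   /* RIFF chunk size */
    memcpy(h + 8, "WAVE", 4);
    memcpy(h + 12, "fmt ", 4);
    put_u32le(h + 16, 16);               /* fmt sub-chunk size */
    put_u16le(h + 20, 1);                /* audio format: PCM */
    put_u16le(h + 22, 2);                /* channels: stereo */
    put_u32le(h + 24, sample_rate);
    put_u32le(h + 28, sample_rate * 4);  /* byte rate: 2 ch * 2 bytes */
    put_u16le(h + 32, 4);                /* block align */
    put_u16le(h + 34, 16);               /* bits per sample */
    memcpy(h + 36, "data", 4);
    put_u32le(h + 40, data_bytes);
}
```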
-
Re-solving My Search Engine Problem
14 years ago, I created a web database of 8-bit Nintendo Entertainment System games. To make it useful, I developed a very primitive search feature.
A few months ago, I decided to create a web database of video game music. To make it useful, I knew it would need to have a search feature. I realized I needed to solve the exact same problem again.
Requirements
The last time I solved this problem, I came up with an excruciatingly naïve idea. Hey, it worked. I really didn’t want to deploy the same solution again because it felt so silly the first time. Surely there are many better ways to solve it now? Many different workable software solutions that do all the hard work for me?
The first time I attacked this, it was 1998 and hosting resources were scarce. On my primary web host I was able to put static HTML pages, perhaps with server side includes. The web host also offered dynamic scripting capabilities via something called htmlscript (a.k.a. MIVA Script). I had a secondary web host at my ISP which allowed me to host conventional CGI scripts on a Unix host, so that’s where I hosted the search function (a Perl CGI script accessing a key/value data store file).
Nowadays, the sky’s the limit. Any type of technology you want to deploy should be tractable. Still, a key requirement was that I didn’t want to pay for additional hosting resources for this silly little side project. That leaves me with the options that my current shared web hosting plan allows, which include such advanced features as PHP, Perl and Python scripts. I can also access MySQL.
Candidates
There are a lot of mature software packages out there which can index and search data and be plugged into a website. But a lot of them would be unworkable on my web hosting plan due to language or library package limitations. Further, a lot of them feel like overkill. At the most basic level, all I really want to do is map a series of video game titles to URLs in a website.
Based on my research, Lucene seems to hold a fair amount of mindshare as an open source indexing and search solution. But I was unsure of my ability to run it on my hosting plan. I think MySQL does some kind of full text search, so I could probably have made a solution around that. Again, it just feels like way more power than I need for this project.
I used Swish-e once about 3 years ago for a little project. I wasn’t confident of my ability to run that on my server either. It has a Perl API but it requires custom modules.
My quest for a search solution grew deep enough that I started perusing a textbook on information retrieval techniques in preparation for possibly writing my own solution from scratch. However, in doing so, I figured out how I might subvert an existing solution to do what I want.
Back to Swish-e
Again, all I wanted to do was pull data out of a database and map that data to a URL in a website. Reading the Swish-e documentation, I learned that the software supports a mode specifically tailored for this. Rather than asking Swish-e to index a series of document files living on disk, you can specify a script for Swish-e to run and the script will generate what appears to be a set of phantom documents for Swish-e to index.
When I ’add’ a game music file to the game music website, I have scripts that scrape the metadata (game title, system, song titles, composers, company, copyright, the original file name on disk, even the ripper/dumper who extracted the chiptune in the first place) and store it all in an SQLite database. When it’s time to update the database, another script systematically generates a series of pseudo-documents that spell out the metadata for each game and prefix each document with a path name. Searching for a term in the index returns a list of paths that contain the search term. Thus, it makes sense for that path to be a site URL.
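The pseudo-documents can be illustrated with a short sketch of Swish-e’s "prog" input convention as I understand it: each document is preceded by Path-Name and Content-Length headers and a blank line. The function name, site path, and metadata below are invented for illustration.

```c
#include <stdio.h>
#include <string.h>

/* Sketch of emitting one pseudo-document in Swish-e's -S prog input
   format: headers, a blank line, then the document body. The site path
   and metadata used here are made up for illustration. */
static int format_pseudo_doc(char *out, size_t cap,
                             const char *site_path, const char *metadata)
{
    return snprintf(out, cap,
                    "Path-Name: %s\n"
                    "Content-Length: %zu\n"
                    "Document-Type: TXT*\n"
                    "\n"
                    "%s",
                    site_path, strlen(metadata), metadata);
}
```

A generator script would print one such record per game; when Swish-e indexes that stream, each Path-Name is what comes back as the document path in search results.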
But what about a web script which can search this Swish-e index? That’s when I noticed Swish-e’s C API and came up with a crazy idea: write the CGI script directly in C. It feels like sheer madness (or at least the height of software insecurity) to write a CGI script directly in C in this day and age. But it works (with the help of cgic for input processing), just as long as I statically link the search script with libswish-e.a (and libz.a). The web host is an x86 machine, after all.
I’m not proud of what I did here; I’m proud of how little I had to do here. The searching CGI script is all of about 30 lines of C code. The one annoyance I experienced while writing it is that I had to consult the Swish-e source code to learn how to get my search results (the "swishdocpath" key, like any other key for SwishResultPropertyStr(), is not documented). Also, the C program just does the simplest job possible: it only queries the term in the index and returns the results in plaintext, in order of relevance, to the client-side JavaScript code which requested them. JavaScript gets the job of sorting and grouping the results for presentation.
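The input-processing step that cgic handles can be illustrated with a dependency-free sketch. get_param here is a hypothetical helper, not the cgic API, and it deliberately skips URL percent-decoding.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of pulling one parameter out of a CGI
   QUERY_STRING such as "q=castlevania&max=10". This is not the cgic
   API mentioned above, and percent-decoding is deliberately omitted. */
static int get_param(const char *qs, const char *key,
                     char *out, size_t cap)
{
    size_t klen = strlen(key);
    const char *p = qs;
    while (p && *p) {
        if (strncmp(p, key, klen) == 0 && p[klen] == '=') {
            const char *v = p + klen + 1;
            size_t vlen = strcspn(v, "&");   /* value ends at '&' or NUL */
            if (vlen + 1 > cap)
                return -1;                   /* value does not fit */
            memcpy(out, v, vlen);
            out[vlen] = '\0';
            return 0;
        }
        p = strchr(p, '&');
        if (p)
            p++;                             /* skip past the '&' */
    }
    return -1;                               /* key not found */
}
```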
Tuning the Search
Almost immediately, I noticed that the search engine could not find one of my favorite SNES games, U.N. Squadron. That’s because all of its associated metadata names Area 88, the game’s original title. Thus, I had to modify the metadata database to allow attaching somewhat free-form tags to games in order to compensate. In this case, an alias title would show up in the game’s pseudo-document.
Roman numerals are still a thorn in my side, just as they were 14 years ago in my original iteration. I dealt with it back then by converting all numbers to Roman numerals during the indexing and searching processes. I’m not willing to do that for this case and I’m still looking for a good solution.
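The old workaround mentioned above, converting numbers to Roman numerals at both index and query time, can be sketched like this (the function name is an assumption of mine):

```c
#include <string.h>

/* Sketch of the number-to-Roman-numeral normalization described above,
   so that e.g. a "4" in a title and a "IV" in a query index identically.
   Handles 1..3999; the function name is invented for illustration. */
static void to_roman(int n, char *out)
{
    static const int   val[] = {1000, 900, 500, 400, 100, 90,
                                50, 40, 10, 9, 5, 4, 1};
    static const char *sym[] = {"M", "CM", "D", "CD", "C", "XC",
                                "L", "XL", "X", "IX", "V", "IV", "I"};
    out[0] = '\0';
    for (int i = 0; i < 13; i++) {
        /* Greedily subtract the largest remaining value. */
        while (n >= val[i]) {
            strcat(out, sym[i]);
            n -= val[i];
        }
    }
}
```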
Another annoying problem deals with Mega Man, a popular franchise. The proper spelling is 2 words but it’s common for people to mash it into one word, Megaman (see also : Spider-Man, Spiderman, Spider Man). The index doesn’t gracefully deal with that and I have some hacks in place to cope for the time being.
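One systematic way to cope with the Mega Man / Megaman class of mismatch would be to index a squashed alias alongside each title: lowercase it and drop everything but letters and digits, so the spelling variants collapse to one key. This is a sketch of one possible fix, not what the site actually does.

```c
#include <ctype.h>
#include <stddef.h>

/* Collapse a title to a squashed alias: lowercase letters and digits
   only, so "Mega Man", "Megaman" and "MEGA-MAN" all become "megaman".
   A sketch of one possible approach, not the site's actual hack. */
static void squash_title(const char *title, char *out, size_t cap)
{
    size_t j = 0;
    for (const char *p = title; *p && j + 1 < cap; p++) {
        if (isalnum((unsigned char)*p))
            out[j++] = (char)tolower((unsigned char)*p);
    }
    out[j] = '\0';
}
```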
Positive Results
I’m pleased with the results so far, and so are the users I have heard from. I know one user expressed amazement that a search for Castlevania turned up Akumajou Densetsu, the Japanese version of Castlevania III: Dracula’s Curse. This didn’t surprise me because I manually added a hint for that mapping. (BTW, if you are a fan of Castlevania III, definitely check out the Akumajou Densetsu soundtrack, which has an upgraded version of the same soundtrack using special audio channels.)
I was a little more surprised when a user announced that searching for ’probotector’ correctly turned up Contra: Hard Corps. I looked into why this was. It turns out that the original chiptune filename was extremely descriptive: "Contra - Hard Corps [Probotector] (1994-08-08)(Konami)". The filenames themselves often carry a bunch of useful metadata, which is why it’s important to index those as well.
And of course, many rippers, dumpers, and taggers have labored for over a decade to lovingly tag these songs with as much composer information as possible, which all gets indexed. The search engine gets a lot of compliments for its ability to find many songs written by favorite composers.