
Other articles (80)
-
Customize by adding your logo, banner or background image
5 September 2013
Some themes take three customization elements into account: adding a logo; adding a banner; adding a background image.
-
MediaSPIP v0.2
21 June 2013
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources in standalone mode.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for an installation in farm mode, you will also need to make other changes (...)
-
Making files available
14 April 2011
By default, when it is initialized, MediaSPIP does not allow visitors to download files, whether they are originals or the result of their transformation or encoding. It only allows them to be viewed.
However, it is possible, and easy, to give visitors access to these documents in various forms.
All of this happens on the template configuration page. You need to go to the channel's administration area and choose in the navigation (...)
On other sites (10707)
-
avformat/electronicarts: add option to return alpha channel in the main video stream...
13 November 2022, by Marton Balint
avformat/electronicarts: add option to return alpha channel in the main video stream in VP6A codec
VP6 alpha in the EA format is a second VP6-encoded video stream in which only the Y component is used and is interpreted as the alpha channel of the first VP6 stream. The alpha VP6 stream is muxed separately from the main VP6 stream and has its own stream headers and packet headers. In theory the two streams might not even have the same resolution (although most likely that is not something that is seen or supported in the wild), but the format is capable of it.

Merged VP6 alpha (also known as the VP6A codec) means that a packet of the video stream contains the corresponding packet of both VP6 substreams, like this:

OffsetOfAlpha, DataPacket, AlphaDataPacket

So the data and alpha data of a frame are merged into a single packet; this is how VP6 video with alpha is muxed in FLV and SWF.

The first approach is more like how the demuxer sees the data in the EA format; unfortunately it is different from what the FLV or SWF format expects, so - having no better place for it in the framework - I decided to do an optional format conversion in the EA demuxer.

Signed-off-by: Marton Balint <cus@passwd.hu>
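
As a rough illustration of the merged layout described above (this sketch is mine, not code from the patch), a consumer of such a packet could split it into its two VP6 sub-packets along these lines; the width and endianness of the offset field are assumptions here, since the commit message only names the fields:

#include <stdint.h>
#include <stddef.h>

/* Hypothetical helper, not part of FFmpeg: split a merged VP6A packet laid
 * out as "OffsetOfAlpha, DataPacket, AlphaDataPacket" into its two VP6
 * sub-packets.  A 24-bit big-endian offset is assumed purely for this sketch. */
static int split_vp6a_packet(const uint8_t *pkt, size_t pkt_size,
                             const uint8_t **data,  size_t *data_size,
                             const uint8_t **alpha, size_t *alpha_size)
{
    if (pkt_size < 3)
        return -1;

    /* offset of the alpha sub-packet, counted from the end of the offset field */
    size_t offset = ((size_t)pkt[0] << 16) | ((size_t)pkt[1] << 8) | pkt[2];
    if (offset > pkt_size - 3)
        return -1;

    *data       = pkt + 3;           /* main VP6 packet */
    *data_size  = offset;
    *alpha      = pkt + 3 + offset;  /* alpha-only VP6 packet (Y plane only) */
    *alpha_size = pkt_size - 3 - offset;
    return 0;
}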
-
Turn off sws_scale's 32-byte alignment requirement when converting to planar YUV
8 November 2022, by flansel
I am experiencing artifacts on the right edge of scaled and converted images when converting into planar YUV pixel formats with sws_scale. I am reasonably sure (although I cannot find it anywhere in the documentation) that this is because sws_scale uses an optimization for 32-byte-aligned lines in the destination. However, I would like to turn this off, because I am using sws_scale for image composition: even though the destination lines may be 32-byte aligned, the output image may not be.


Example.


The full output frame is 1280x720 yuv422p10le (this is 32-byte aligned).
However, into the top-left corner I am scaling an image with an output width of 1280 / 3 = 426.
426 in this format is not 32-byte aligned, but I believe sws_scale sees that the output linesize is 32-byte aligned and writes past the width of 426, putting garbage in the next 22 bytes of data, treating them as simple padding, when in my case this is displayable area.
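
To make the arithmetic concrete (a small illustration of mine, not part of the original question): in yuv422p10le each luma sample occupies 2 bytes, so a 426-pixel row is 852 bytes, which is not a multiple of 32, whereas rounding the width down to a multiple of 16 pixels, as the commented-out line in the code further below does, gives a 32-byte-aligned row:

#include <stdio.h>

int main(void) {
    /* yuv422p10le stores 10-bit samples in 2 bytes, so a luma row is width * 2 bytes */
    int width = 1280 / 3;                    /* 426 */
    int row_bytes = width * 2;               /* 852, not a multiple of 32 */
    int aligned_width = width - width % 16;  /* 416, i.e. 832 bytes, a multiple of 32 */

    printf("%d-pixel row: %d bytes, 32-byte aligned: %s\n",
           width, row_bytes, row_bytes % 32 == 0 ? "yes" : "no");
    printf("rounded width: %d pixels, %d bytes\n", aligned_width, aligned_width * 2);
    return 0;
}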


This is why I need to actually disable this optimization, or somehow trick sws_scale into believing it does not apply, while keeping intact the way the program works, which is otherwise fine.


I have tried adding extra padding to the destination lines so that they are no longer 32-byte aligned, but this did not help as far as I can tell.


Edit: code example added below; rendering is omitted for simplicity.
A similar issue is discussed here, but unfortunately, as stated, that fix will not work for my use case: https://github.com/obsproject/obs-studio/pull/2836


Use the commented-out line of code to switch between an output width that is and is not 32-byte aligned.


#include "libswscale/swscale.h"
#include "libavutil/imgutils.h"
#include "libavutil/pixelutils.h"
#include "libavutil/pixfmt.h"
#include "libavutil/pixdesc.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv) {

    /// Set up a 1280x720 window, and an item with 1/3 width and height of the window.
    int window_width, window_height, item_width, item_height;
    window_width = 1280;
    window_height = 720;
    item_width = (window_width / 3);
    item_height = (window_height / 3);

    int item_out_width = item_width;
    /// This line sets the item width to be 32-byte aligned; uncomment to see uncorrupted results.
    /// Note %16 because outformat is 2 bytes per component.
    //item_out_width -= (item_width % 16);

    enum AVPixelFormat outformat = AV_PIX_FMT_YUV422P10LE;
    enum AVPixelFormat informat = AV_PIX_FMT_UYVY422;
    int window_lines[4] = {0};
    av_image_fill_linesizes(window_lines, outformat, window_width);

    uint8_t *window_planes[4] = {0};
    window_planes[0] = calloc(1, window_lines[0] * window_height);
    window_planes[1] = calloc(1, window_lines[1] * window_height);
    window_planes[2] = calloc(1, window_lines[2] * window_height); /// Fill the window with all 0s, this is green in yuv.

    int item_lines[4] = {0};
    av_image_fill_linesizes(item_lines, informat, item_width);

    uint8_t *item_planes[4] = {0};
    item_planes[0] = malloc(item_lines[0] * item_height);
    memset(item_planes[0], 100, item_lines[0] * item_height);

    struct SwsContext *ctx;
    ctx = sws_getContext(item_width, item_height, informat,
                         item_out_width, item_height, outformat, SWS_FAST_BILINEAR, NULL, NULL, NULL);

    /// Check a block in the normal region.
    printf("Pre scale normal region %d %d %d\n", (int)((uint16_t*)window_planes[0])[0], (int)((uint16_t*)window_planes[1])[0],
           (int)((uint16_t*)window_planes[2])[0]);

    /// Check a block in the corrupted region (should be all zeros); these values are outside the converted region.
    int corrupt_offset_y = (item_out_width + 3) * 2; /// (item_out_width + 3) pixels * 2 bytes per component, Y plane
    int corrupt_offset_uv = (item_out_width + 3);    /// (item_out_width + 3) pixels * 2 bytes per component >> 1 for horizontal subsampling, U and V planes

    printf("Pre scale corrupted region %d %d %d\n", (int)(*((uint16_t*)(window_planes[0] + corrupt_offset_y))),
           (int)(*((uint16_t*)(window_planes[1] + corrupt_offset_uv))), (int)(*((uint16_t*)(window_planes[2] + corrupt_offset_uv))));

    sws_scale(ctx, (const uint8_t**)item_planes, item_lines, 0, item_height, window_planes, window_lines);

    /// Perform the same tests after scaling.
    printf("Post scale normal region %d %d %d\n", (int)((uint16_t*)window_planes[0])[0], (int)((uint16_t*)window_planes[1])[0],
           (int)((uint16_t*)window_planes[2])[0]);
    printf("Post scale corrupted region %d %d %d\n", (int)(*((uint16_t*)(window_planes[0] + corrupt_offset_y))),
           (int)(*((uint16_t*)(window_planes[1] + corrupt_offset_uv))), (int)(*((uint16_t*)(window_planes[2] + corrupt_offset_uv))));

    sws_freeContext(ctx);
    return 0;
}


Example Output:

//No alignment
Pre scale normal region 0 0 0
Pre scale corrupted region 0 0 0
Post scale normal region 400 400 400
Post scale corrupted region 512 36865 36865

//With alignment
Pre scale normal region 0 0 0
Pre scale corrupted region 0 0 0
Post scale normal region 400 400 400
Post scale corrupted region 0 0 0
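
For what it is worth, one approach that avoids relying on how libswscale treats the tail of each row (a sketch of mine, not something proposed in the post above) is to scale into a private, padded scratch frame and then copy only the displayable bytes of each row into the composition frame; any write past the requested width then lands in the scratch frame's padding:

#include <string.h>
#include "libswscale/swscale.h"
#include "libavutil/imgutils.h"
#include "libavutil/pixdesc.h"
#include "libavutil/mem.h"

/* Hypothetical helper: scale into a generously padded scratch frame, then
 * copy only the visible bytes of each row into the composition frame. */
static int scale_into_region(struct SwsContext *ctx,
                             const uint8_t *const src[], const int src_lines[],
                             int src_h,
                             uint8_t *const dst[], const int dst_lines[],
                             int dst_w, int dst_h, enum AVPixelFormat dst_fmt)
{
    uint8_t *tmp[4] = {0};
    int tmp_lines[4] = {0};

    /* av_image_alloc pads every line to the requested alignment (64 bytes here),
     * so any overwrite past dst_w stays inside this scratch frame. */
    int ret = av_image_alloc(tmp, tmp_lines, dst_w, dst_h, dst_fmt, 64);
    if (ret < 0)
        return ret;

    sws_scale(ctx, src, src_lines, 0, src_h, tmp, tmp_lines);

    const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(dst_fmt);
    for (int p = 0; p < 4 && tmp_lines[p]; p++) {
        int plane_h   = (p == 1 || p == 2) ? dst_h >> desc->log2_chroma_h : dst_h;
        int row_bytes = av_image_get_linesize(dst_fmt, dst_w, p); /* displayable bytes per row */
        for (int y = 0; y < plane_h; y++)
            memcpy(dst[p] + y * dst_lines[p], tmp[p] + y * tmp_lines[p], row_bytes);
    }

    av_freep(&tmp[0]); /* av_image_alloc uses one backing buffer owned by tmp[0] */
    return 0;
}

In the example above this would mean letting sws_scale write the 426-pixel item into its own buffer and then copying 852 bytes per luma row and 426 bytes per chroma row into window_planes at the desired position.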



-
Able to get HLS to play in the VLC viewer but not in a browser
6 May 2022, by Tamothee
So I have been trying to get an IP camera to connect to a React app and show live video. I found and followed this tutorial: https://www.youtube.com/watch?v=-a5MAaEaizU&t=185s


I am able to get the VLC viewer to open the HLS server link and display what the camera sees. However, when I plug the link into my code or a browser HLS viewer such as https://hls-js.netlify.app/demo/, the m3u8 link does not play. There is no error, and when I inspect the network tab I do receive the m3u8 and .ts files (screenshot of the network panel omitted).


ffmpeg -i rtsp://admin:Password1234@192.168.1.64:554/Streaming/Channels/101 -fflags flush_packets -max_delay 2 -flags -global_header -hls_time 2 -hls_list_size 3 -vcodec copy -y ./index.m3u8



This is the ffmpeg command that I ran to convert my RTSP output to HLS.


var http = require('http');
var fs = require('fs');

const port = 1234;

http.createServer(function (request, response) {
    console.log('request starting...');

    var filePath = '.' + request.url;

    fs.readFile(filePath, function (error, content) {
        if (error) {
            if (error.code == 'ENOENT') {
                fs.readFile('./404.html', function (error, content) {
                    response.writeHead(404, { 'Access-Control-Allow-Origin': '*' });
                    response.end(content, 'utf-8');
                });
            } else {
                response.writeHead(500);
                response.end('Sorry, check with the site admin for error: ' + error.code + ' ..\n');
            }
        } else {
            response.writeHead(200, { 'Access-Control-Allow-Origin': '*' });
            response.end(content, 'utf-8');
        }
    });
}).listen(port);
console.log(`Server running at http://127.0.0.1:${port}/`);



This is the code for the HLS server that receives the requests and sends the m3u8 and .ts files to the user.






And I am trying to play the link using react-hls-player (the component snippet was not included here).


I hope this is not a stupid question, as I am a beginner, and I hope that someone can help me with this problem.