
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (45)
-
Contributing to its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to sign up to the translators' mailing list to ask for more information.
Currently MediaSPIP is only available in French and (...)
-
Enabling/disabling features (plugins)
18 February 2011, by
To manage the addition and removal of extra features (or plugins), MediaSPIP uses SVP as of version 0.2.
SVP makes it easy to activate plugins from the MediaSPIP configuration area.
To access it, simply go to the configuration area and then to the "Plugin management" page.
MediaSPIP ships by default with all the plugins referred to as "compatible"; they have been tested and integrated so as to work perfectly with each (...)
-
The statuses of shared-hosting (mutualisation) instances
13 March 2010, by
For reasons of general compatibility between the shared-hosting management plugin and SPIP's original functions, the statuses of instances are the same as for any other object (articles, etc.); only their names in the interface differ slightly.
The possible statuses are: prepa (requested), which corresponds to an instance requested by a user. If the site was already created in the past, it is switched to disabled mode. publie (validated), which corresponds to an instance validated by a (...)
On other sites (8606)
-
Can't play rtp stream from ffmpeg/avconv, no data received
6 March 2014, by Foo Barazz
I started avserver on my Raspberry Pi with a webcam attached, and I read from /dev/video0 with:
pi@raspberrypi $ avconv -f video4linux2 -i /dev/video0 -vcodec mpeg2video -r 25 -pix_fmt yuv420p -me_method epzs -b 2600k -bt 256k -f rtp rtp://192.168.0.124:8090
avconv version 0.8.6-6:0.8.6-1+rpi1, Copyright (c) 2000-2013 the Libav developers
built on Mar 31 2013 13:58:10 with gcc 4.6.3
[video4linux2 @ 0x17c1720] Estimating duration from bitrate, this may be inaccurate
Input #0, video4linux2, from '/dev/video0':
Duration: N/A, start: 615.594215, bitrate: 36864 kb/s
Stream #0.0: Video: rawvideo, yuyv422, 320x240, 36864 kb/s, 30 tbr, 1000k tbn, 30 tbc
[buffer @ 0x17c16e0] w:320 h:240 pixfmt:yuyv422
[avsink @ 0x17c2f00] auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'out'
[scale @ 0x17c34c0] w:320 h:240 fmt:yuyv422 -> w:320 h:240 fmt:yuv420p flags:0x4
Output #0, rtp, to 'rtp://192.168.0.124:8090':
Metadata:
encoder : Lavf53.21.1
Stream #0.0: Video: mpeg2video, yuv420p, 320x240, q=2-31, 2600 kb/s, 90k tbn, 25 tbc
Stream mapping:
Stream #0:0 -> #0:0 (rawvideo -> mpeg2video)
SDP:
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 192.168.0.124
t=0 0
a=tool:libavformat 53.21.1
m=video 8090 RTP/AVP 32
b=AS:2600
Press ctrl-c to stop encoding
frame= 576 fps= 25 q=2.0 size= 2133kB time=23.00 bitrate= 759.8kbits/s dup=390 drop=0
frame= 590 fps= 25 q=2.0 size= 2191kB time=23.56 bitrate= 762.0kbits/s dup=400 drop=0
frame= 1320 fps= 25 q=2.0 size= 4932kB time=52.76 bitrate= 765.8kbits/s dup=908 drop=0
Seems to work fine; it reads data from the webcam.
Now I'm trying to simply play it with ffplay from my Mac with:
$ ffplay rtp://192.168.0.124:8090
ffplay version 1.2.4 Copyright (c) 2003-2013 the FFmpeg developers
built on Mar 1 2014 15:18:21 with Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn)
configuration: --prefix=/usr/local/Cellar/ffmpeg/1.2.4 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-nonfree --enable-hardcoded-tables --enable-avresample --enable-vda --cc=clang --host-cflags= --host-ldflags= --enable-libx264 --enable-libfaac --enable-libmp3lame --enable-libxvid --enable-libfreetype --enable-libtheora --enable-libvorbis --enable-libvpx --enable-librtmp --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libvo-aacenc --enable-libass --enable-ffplay --enable-libspeex --enable-libschroedinger --enable-libfdk-aac --enable-libopus --enable-frei0r --enable-libopenjpeg --extra-cflags='-I/usr/local/Cellar/openjpeg/1.5.1/include/openjpeg-1.5 '
libavutil 52. 18.100 / 52. 18.100
libavcodec 54. 92.100 / 54. 92.100
libavformat 54. 63.104 / 54. 63.104
libavdevice 54. 3.103 / 54. 3.103
libavfilter 3. 42.103 / 3. 42.103
libswscale 2. 2.100 / 2. 2.100
libswresample 0. 17.102 / 0. 17.102
libpostproc 52. 2.100 / 52. 2.100
nan A-V: 0.000 fd= 0 aq= 0KB vq= 0KB sq= 0B f=0/0
nan A-V: 0.000 fd= 0 aq= 0KB vq= 0KB sq= 0B f=0/0
nan A-V: 0.000 fd= 0 aq= 0KB vq= 0KB sq= 0B f=0/0
The video doesn't open, and it doesn't seem to be reading any data from the Raspberry Pi.
I use the default configuration for avserver.
The webcam is definitely working, as I managed to write out still images from it with avconv. What did I miss?
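
For reference, one approach that is often suggested for this kind of setup (a hedged sketch, not a confirmed fix for this particular case) is to give the player the SDP that avconv prints, rather than pointing it straight at the rtp:// port, since the SDP is what describes the session and payload to the receiver. Assuming 192.168.0.124 is the Mac that should receive the stream, that would look roughly like:

# On the receiving Mac: save the SDP block that avconv printed (the lines from "v=0" to "b=AS:2600")
cat > stream.sdp <<'EOF'
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 192.168.0.124
t=0 0
a=tool:libavformat 53.21.1
m=video 8090 RTP/AVP 32
b=AS:2600
EOF

# Then play from the SDP description instead of the raw RTP URL
ffplay stream.sdp
# (Newer FFmpeg builds may additionally require:
#  ffplay -protocol_whitelist file,udp,rtp stream.sdp)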
-
FFmpeg player backporting to Android 2.1 - one more problem
22 April 2024, by tretdm
I looked for a lot of information about how to build and use FFmpeg on early versions of Android, looked at the source code of players from 2011-2014, and was able to easily build FFmpeg 4.0.4 and 3.1.4 on the NDKv5 platform. I have highlighted the main points:

- <android/bitmap.h> and <android/native_window.h> did not exist before Android 2.2 (API Level 8); this requires some effort to implement buffer management for A/V streams, since in practice, when playing video, the application silently crashed after a few seconds due to overflow (code examples in C++ and Java below)
- FFmpeg is, imho, the only way to support a sufficient number of codecs that are not officially included in Android 2.1 and above

void decodeVideoFromPacket(JNIEnv *env, jobject instance,
                           jclass mplayer_class, AVPacket avpkt,
                           int total_frames, int length) {
    AVFrame *pFrame = NULL;
    AVFrame *pFrameRGB = NULL;
    pFrame = avcodec_alloc_frame();
    pFrameRGB = avcodec_alloc_frame();
    int frame_size = avpicture_get_size(PIX_FMT_RGB32, gVideoCodecCtx->width, gVideoCodecCtx->height);
    unsigned char* buffer = (unsigned char*)av_malloc((size_t)frame_size * 3);
    if (!buffer) {
        av_free(pFrame);
        av_free(pFrameRGB);
        return;
    }
    jbyteArray buffer2;
    jmethodID renderVideoFrames = env->GetMethodID(mplayer_class, "renderVideoFrames", "([BI)V");
    int frameDecoded;
    avpicture_fill((AVPicture*) pFrame,
                   buffer,
                   gVideoCodecCtx->pix_fmt,
                   gVideoCodecCtx->width,
                   gVideoCodecCtx->height);

    if (avpkt.stream_index == gVideoStreamIndex) { // If video stream found
        int size = avpkt.size;
        total_frames++;
        struct SwsContext *img_convert_ctx = NULL;
        avcodec_decode_video2(gVideoCodecCtx, pFrame, &frameDecoded, &avpkt);
        if (!frameDecoded || pFrame == NULL) {
            return;
        }

        try {
            PixelFormat pxf;
            // RGB565 by default for Android Canvas in pre-Gingerbread devices.
            if (android::get_android_api_version(env) >= ANDROID_API_CODENAME_GINGERBREAD) {
                pxf = PIX_FMT_BGR32;
            } else {
                pxf = PIX_FMT_RGB565;
            }

            int rgbBytes = avpicture_get_size(pxf, gVideoCodecCtx->width,
                                              gVideoCodecCtx->height);

            // Converting YUV to RGB frame & RGB frame to char* buffer
            buffer = convertYuv2Rgb(pxf, pFrame, rgbBytes); // result of av_image_copy_to_buffer()

            if (buffer == NULL) {
                return;
            }

            buffer2 = env->NewByteArray((jsize) rgbBytes);
            env->SetByteArrayRegion(buffer2, 0, (jsize) rgbBytes,
                                    (jbyte *) buffer);
            env->CallVoidMethod(instance, renderVideoFrames, buffer2, rgbBytes);
            env->DeleteLocalRef(buffer2);
            free(buffer);
        } catch (...) {
            if (debug_mode) {
                LOGE(10, "[ERROR] Render video frames failed");
                return;
            }
        }
    }
}



private void renderVideoFrames(final byte[] buffer, final int length) {
    new Thread(new Runnable() {
        @Override
        public void run() {
            Canvas c;
            VideoTrack track = null;
            for (int tracks_index = 0; tracks_index < tracks.size(); tracks_index++) {
                if (tracks.get(tracks_index) instanceof VideoTrack) {
                    track = (VideoTrack) tracks.get(tracks_index);
                }
            }
            if (track != null) {
                int frame_width = track.frame_size[0];
                int frame_height = track.frame_size[1];
                if (frame_width > 0 && frame_height > 0) {
                    try {
                        // RGB_565 == 65K colours (16 bit)
                        // RGB_8888 == 16.7M colours (24 bit w/ alpha ch.)
                        int bpp = Build.VERSION.SDK_INT > 9 ? 16 : 24;
                        Bitmap.Config bmp_config =
                                bpp == 24 ? Bitmap.Config.RGB_565 : Bitmap.Config.ARGB_8888;
                        Paint paint = new Paint();
                        if (buffer != null && holder != null) {
                            holder.setType(SurfaceHolder.SURFACE_TYPE_NORMAL);
                            if ((c = holder.lockCanvas()) == null) {
                                Log.d(MPLAY_TAG, "Lock canvas failed");
                                return;
                            }
                            ByteBuffer bbuf =
                                    ByteBuffer.allocateDirect(minVideoBufferSize);
                            bbuf.rewind();
                            for (int i = 0; i < buffer.length; i++) {
                                bbuf.put(i, buffer[i]);
                            }
                            bbuf.rewind();

                            // The approximate location where the application crashed.
                            Bitmap bmp = Bitmap.createBitmap(frame_width, frame_height, bmp_config);
                            bmp.copyPixelsFromBuffer(bbuf);

                            float aspect_ratio = (float) frame_width / (float) frame_height;
                            int scaled_width = (int) (aspect_ratio * (c.getHeight()));
                            c.drawBitmap(bmp,
                                    null,
                                    new RectF(
                                            ((c.getWidth() - scaled_width) / 2), 0,
                                            ((c.getWidth() - scaled_width) / 2) + scaled_width,
                                            c.getHeight()),
                                    null);
                            holder.unlockCanvasAndPost(c);
                            bmp.recycle();
                            bbuf.clear();
                        } else {
                            Log.d(MPLAY_TAG, "Video frame buffer is null");
                        }
                    } catch (Exception ex) {
                        ex.printStackTrace();
                    } catch (OutOfMemoryError oom) {
                        oom.printStackTrace();
                        stop();
                    }
                }
            }
        }
    }).start();
}



Exception (tested in an Android 4.1.2 emulator):


E/dalvikvm-heap: Out of memory on a 1228812-byte allocation
I/dalvikvm: "Thread-495" prio=5 tid=21 RUNNABLE
 ................................................
 at android.graphics.Bitmap.nativeCreate(Native Method)
 at android.graphics.Bitmap.createBitmap(Bitmap.java:640)
 at android.graphics.Bitmap.createBitmap(Bitmap.java:620)
 at [app_package_name].MediaPlayer$5.run(MediaPlayer.java:406)
 at java.lang.Thread.run(Thread.java:856)



For clarification: I first compiled FFmpeg 0.11.x on a virtual machine running Ubuntu 12.04 LTS using a build script I wrote, looked for player examples suitable for Android below 2.2 (there is little information about them, unfortunately), and opened a file in the player; after showing the first few frames it crashed with a stack or buffer overflow, so I put off developing the player for some time.


Is there anything ready-made that typically fits into one C++ file and takes into account all the nuances of backporting? Thanks in advance.
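
Purely as an illustration of one way around the per-frame allocation shown in the stack trace (not a ready-made player; the field, method, and buffer names below are hypothetical), a common pattern is to allocate the Bitmap and the direct ByteBuffer once and reuse them for every frame, instead of calling Bitmap.createBitmap() inside the render loop:

// Hypothetical sketch inside the player class (android.graphics.Bitmap/Canvas,
// java.nio.ByteBuffer and android.view.SurfaceHolder assumed to be imported).
// Assumes the native side delivers RGB565 pixels (2 bytes per pixel), matching
// the PIX_FMT_RGB565 branch of the C++ code above.
private Bitmap reusableBitmap;     // hypothetical field, reused across frames
private ByteBuffer reusableBuffer; // hypothetical field, reused across frames

private void drawRgb565Frame(byte[] pixels, int width, int height) {
    if (reusableBitmap == null
            || reusableBitmap.getWidth() != width
            || reusableBitmap.getHeight() != height) {
        // Allocate once (or when the frame size changes), not once per frame.
        reusableBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
        reusableBuffer = ByteBuffer.allocateDirect(width * height * 2);
    }
    reusableBuffer.clear();
    reusableBuffer.put(pixels, 0, Math.min(pixels.length, reusableBuffer.capacity()));
    reusableBuffer.rewind();
    reusableBitmap.copyPixelsFromBuffer(reusableBuffer);

    Canvas c = holder.lockCanvas(); // 'holder' as in the original renderVideoFrames()
    if (c != null) {
        c.drawBitmap(reusableBitmap, 0, 0, null);
        holder.unlockCanvasAndPost(c);
    }
}

Scaling to the surface, as done with the RectF overload of drawBitmap() in the original code, works the same way with a reused Bitmap; the point is only that the Dalvik heap is no longer churned by a roughly 1.2 MB allocation on every frame.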


-
How to fix laggy ffmpeg screen and audio capture?
26 July 2022, by Wh0r00t
I am using ffmpeg to capture the screen along with audio.

The ffmpeg command that I tried is:

ffmpeg -y \
 -f x11grab \
 -framerate 60 \
 -s 1366x768 \
 -i :0.0 \
 -f alsa -i default -ac 2 \
 -r 30 \
 -c:v h264 -crf 0 -preset ultrafast -c:a vorbis -strict experimental \
 "$HOME/Videos/$fname-$(date '+%y%m%d-%H%M-%S').mkv"



The stdout of the ffmpeg run: https://pastebin.com/Qmi5TMKv

ffmpeg version n5.0.1 Copyright (c) 2000-2022 the FFmpeg developers
 built with gcc 12.1.0 (GCC)
 configuration: --prefix=/usr --disable-debug --disable-static --disable-stripping --enable-amf --enable-avisynth --enable-cuda-llvm --enable-lto --enable-fontconfig --enable-gmp --enable-gnutls --enable-gpl --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libdav1d --enable-libdrm --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libiec61883 --enable-libjack --enable-libmfx --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librav1e --enable-librsvg --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxml2 --enable-libxvid --enable-libzimg --enable-nvdec --enable-nvenc --enable-shared --enable-version3
 libavutil 57. 17.100 / 57. 17.100
 libavcodec 59. 18.100 / 59. 18.100
 libavformat 59. 16.100 / 59. 16.100
 libavdevice 59. 4.100 / 59. 4.100
 libavfilter 8. 24.100 / 8. 24.100
 libswscale 6. 4.100 / 6. 4.100
 libswresample 4. 3.100 / 4. 3.100
 libpostproc 56. 3.100 / 56. 3.100
[x11grab @ 0x561faf77eb00] Stream #0: not enough frames to estimate rate; consider increasing probesize
Input #0, x11grab, from ':0.0':
 Duration: N/A, start: 1658814267.169414, bitrate: 2014248 kb/s
 Stream #0:0: Video: rawvideo (BGR[0] / 0x524742), bgr0, 1366x768, 2014248 kb/s, 60 fps, 1000k tbr, 1000k tbn
Guessed Channel Layout for Input Stream #1.0 : stereo
Input #1, alsa, from 'default':
 Duration: N/A, start: 1658814267.230653, bitrate: 1536 kb/s
 Stream #1:0: Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/s
Stream mapping:
 Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
 Stream #1:0 -> #0:1 (pcm_s16le (native) -> vorbis (native))
Press [q] to stop, [?] for help
[libx264 @ 0x561faf7d4300] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX XOP FMA3 BMI1
[libx264 @ 0x561faf7d4300] profile High 4:4:4 Predictive, level 3.2, 4:4:4, 8-bit
[libx264 @ 0x561faf7d4300] 264 - core 164 r3081 19856cc - H.264/MPEG-4 AVC codec - Copyleft 2003-2021 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=0 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=0 chroma_qp_offset=0 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=250 keyint_min=25 scenecut=0 intra_refresh=0 rc=cqp mbtree=0 qp=0
[alsa @ 0x561faf78a940] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
Output #0, matroska, to '/home/earth/Videos/-220726-1114-27.mkv':
 Metadata:
 encoder : Lavf59.16.100
 Stream #0:0: Video: h264 (H264 / 0x34363248), yuv444p(tv, progressive), 1366x768, q=2-31, 30 fps, 1k tbn
 Metadata:
 encoder : Lavc59.18.100 libx264
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
 Stream #0:1: Audio: vorbis (oV[0][0] / 0x566F), 48000 Hz, stereo, fltp
 Metadata:
 encoder : Lavc59.18.100 vorbis
[vorbis @ 0x561faf7d5500] Queue input is backward in time0 bitrate=N/A speed= 0x
frame= 153 fps= 31 q=-1.0 Lsize= 2295kB time=00:00:05.06 bitrate=3709.5kbits/s dup=0 drop=150 speed=1.01x
video:2282kB audio:7kB subtitle:0kB other streams:0kB global headers:3kB muxing overhead: 0.281689%
[libx264 @ 0x561faf7d4300] frame I:1 Avg QP: 0.00 size:381729
[libx264 @ 0x561faf7d4300] frame P:152 Avg QP: 0.00 size: 12857
[libx264 @ 0x561faf7d4300] mb I I16..4: 100.0% 0.0% 0.0%
[libx264 @ 0x561faf7d4300] mb P I16..4: 56.3% 0.0% 0.0% P16..4: 0.1% 0.0% 0.0% 0.0% 0.0% skip:43.6%
[libx264 @ 0x561faf7d4300] coded y,u,v intra: 1.6% 1.6% 1.6% inter: 0.2% 0.2% 0.2%
[libx264 @ 0x561faf7d4300] i16 v,h,dc,p: 99% 1% 0% 0%
[libx264 @ 0x561faf7d4300] kb/s:3664.27
Exiting normally, received signal 15.



I am using the ultrafast preset because I read that it helps to avoid compressing the video too much.
The playback output of the test file recorded with ffmpeg is as below.


(+) Video --vid=1 (h264 1366x768 30.000fps)
 (+) Audio --aid=1 (vorbis 2ch 48000Hz)
AO: [pulse] 48000Hz stereo 2ch float
VO: [gpu] 1366x768 yuv444p
AV: 00:00:03 / 00:00:19 (17%) A-V: 0.000
[mkv] Discarding potentially broken or useless index.
AV: 00:00:14 / 00:00:19 (73%) A-V: 0.000

Exiting... (Quit)



The recording works, but there is an audio lag. If I record the same thing using simplescreenrecorder with the same settings:

audio backend - alsa
source - default
audio codec - vorbis
video codec - h.264
container - matroska
preset - superfast

The simplescreenrecorder log: https://pastebin.com/83hMMRQF

[PageRecord::StartPage] Starting page ...
[PageRecord::StartPage] Started page.
[PageRecord::StartOutput] Starting output ...
[PageRecord::StartOutput] Output file: /home/earth/Videos/simplescreenrecorder-2022-07-26_11.18.13.mkv
[Muxer::Init] Using format matroska (Matroska).
[Muxer::AddStream] Using codec libx264 (libx264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10).
[VideoEncoder::PrepareStream] Using pixel format nv12.
[libx264 @ 0x563436cbfd40] using SAR=1/1
[libx264 @ 0x563436cbfd40] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX XOP FMA3 BMI1
[libx264 @ 0x563436cbfd40] profile High, level 3.2, 4:2:0, 8-bit
[libx264 @ 0x563436cbfd40] 264 - core 164 r3081 19856cc - H.264/MPEG-4 AVC codec - Copyleft 2003-2021 - http://www.videolan.org/x264.html - options: cabac=1 ref=1 deblock=1:0:0 analyse=0x3:0x3 me=dia subme=1 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=4 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=1 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc=crf mbtree=0 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 pb_ratio=1.30 aq=1:1.00
[Muxer::AddStream] Using codec libvorbis (libvorbis).
[BaseEncoder::EncoderThread] Encoder thread started.
[AudioEncoder::PrepareStream] Using sample format f32p.
[BaseEncoder::EncoderThread] Encoder thread started.
[Muxer::MuxerThread] Muxer thread started.
[PageRecord::StartOutput] Started output.
[Synchronizer::SynchronizerThread] Synchronizer thread started.
[PageRecord::StartInput] Starting input ...
[X11Input::Init] Using X11 shared memory.
[X11Input::Init] Detecting screen configuration ...
[X11Input::Init] Screen 0: x1 = 0, y1 = 0, x2 = 1366, y2 = 768
[X11Input::InputThread] Input thread started.
[ALSAInput::InputThread] Using sample format s16.
[PageRecord::StartInput] Started input.
[ALSAInput::InputThread] Input thread started.
[FastResampler::Resample] Resample ratio is 1.0000 (was 0.0000).
[PageRecord::StopOutput] Stopping output ...
[PageRecord::StopOutput] Stopped output.
[PageRecord::StopInput] Stopping input ...
[X11Input::~X11Input] Stopping input thread ...
[X11Input::InputThread] Input thread stopped.
[ALSAInput::~ALSAInput] Stopping input thread ...
[ALSAInput::InputThread] Input thread stopped.
[PageRecord::StopInput] Stopped input.



It works perfectly, without any lag whatsoever. The playback output of the test file recorded with simplescreenrecorder is as below.


(+) Video --vid=1 (h264 1366x768)
 (+) Audio --aid=1 (vorbis 2ch 48000Hz)
AO: [pulse] 48000Hz stereo 2ch float
VO: [gpu] 1366x768 yuv420p
AV: 00:00:01 / 00:00:17 (7%) A-V: 0.000
[mkv] Discarding potentially broken or useless index.
AV: 00:00:08 / 00:00:17 (47%) A-V: 0.000

Exiting... (Quit)



The only difference that I saw between these two recordings is
VO: [gpu] 1366x768 yuv444p
versus
VO: [gpu] 1366x768 yuv420p
for ffmpeg and simplescreenrecorder respectively.
I do not know if this matters, but is there something that I could tweak to make ffmpeg capture the screen and audio without any lag?
As answered here, https://unix.stackexchange.com/questions/675436/ffmpeg-recording-slows-down-when-audio-inputs-are-added, I do open pavucontrol, but it's not much of a help.
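
For what it's worth, the ffmpeg log above already hints at one knob ("Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)"), and the built-in vorbis encoder used here is still experimental. A hedged variant of the same command that is often tried in this situation (assuming libvorbis is available, which the --enable-libvorbis in the build configuration above suggests) would be:

ffmpeg -y \
 -f x11grab -thread_queue_size 1024 -framerate 60 -s 1366x768 -i :0.0 \
 -f alsa -thread_queue_size 1024 -i default -ac 2 \
 -r 30 \
 -c:v libx264 -crf 0 -preset ultrafast \
 -c:a libvorbis \
 "$HOME/Videos/$fname-$(date '+%y%m%d-%H%M-%S').mkv"

Besides spelling the video encoder explicitly as libx264 (which is what the log shows was used anyway), the only intended changes relative to the original command are the larger per-input thread_queue_size and the switch from the experimental native vorbis encoder to libvorbis; whether that removes the lag on this machine is not guaranteed.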

The reason that I am going with ffmpeg is that I can kill the process using its pid at a particular time with cron jobs.
This is my system information, in case it helps:

System:
 Host: taco Kernel: 5.18.12-arch1-1 arch: x86_64 bits: 64 Desktop: dwm
 v: 6.2 Distro: Arch Linux
Machine:
 Type: Desktop Mobo: Acer model: A75F2-M v: P21-A1 serial: N/A BIOS: Acer
 v: P21-A1 date: 02/07/2014
CPU:
 Info: quad core model: AMD A8-5500B APU with Radeon HD Graphics bits: 64
 type: MT MCP cache: L2: 4 MiB
 Speed (MHz): avg: 1400 min/max: 1400/3200 cores: 1: 1400 2: 1400 3: 1400
 4: 1400
Graphics:
 Device-1: AMD Trinity [Radeon HD 7560D] driver: radeon v: kernel
 Display: server: X.Org v: 21.1.4 driver: X: loaded: modesetting
 gpu: radeon resolution: 1366x768~60Hz
 OpenGL: renderer: AMD ARUBA (DRM 2.50.0 / 5.18.12-arch1-1 LLVM 14.0.6)
 v: 4.3 Mesa 22.1.3
Audio:
 Device-1: AMD FCH Azalia driver: snd_hda_intel
 Sound Server-1: ALSA v: k5.18.12-arch1-1 running: yes
 Sound Server-2: PulseAudio v: 16.1 running: yes
 Sound Server-3: PipeWire v: 0.3.56 running: yes



Any help is much appreciated.