
Creating a Photo Mosaic from League of Legends Skin Images Using Python Web Scraping

This tutorial explains how to crawl all League of Legends champion skin images with a Python script: decode the URL pattern, download the assets, and then assemble them into a large photo mosaic using third-party mosaic software, with full code and step-by-step instructions.

Python Programming Learning Circle

The article introduces the concept of a "photo mosaic" where a large image is composed of many smaller pictures, and describes the author's inspiration from a League of Legends (LOL) skin mosaic they saw online.

Origin: The author discovered a mosaic made from nearly a thousand LOL skin screenshots and decided to recreate it by first collecting all the skin images.

Crawling approach: Because the LOL skin gallery paginates without changing the URL, the author inspected the network requests (F12) to find the URL pattern behind each skin image. The pattern is https://game.gtimg.cn/images/daoju/app/lol/medium/2-<heroID><skinID>-9.jpg, where heroID is the champion's numeric key and skinID is a zero-padded three-digit skin index (the article cites 001-015, though the script below probes 001-020 and simply skips combinations that do not exist).
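For instance, assuming a champion whose key is 1 and its first skin, the URL would be assembled like this (a sketch of the pattern above, not output captured from the live site):

```python
# Hypothetical example values: champion key "1", skin index "001"
hero_key = '1'
skin_no = '001'

# Concatenate prefix + heroID + skinID + suffix, per the pattern above
url = ('https://game.gtimg.cn/images/daoju/app/lol/medium/2-'
       + hero_key + skin_no + '-9.jpg')
# → https://game.gtimg.cn/images/daoju/app/lol/medium/2-1001-9.jpg
```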

By extracting hero IDs from the JavaScript file champion.js and iterating over possible skin numbers, the script builds a complete list of image URLs.
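The extraction relies on two regexes that pull the paired "id" and "key" fields out of the champion.js payload. A self-contained illustration on a made-up fragment (the sample string below is hypothetical, not the live file):

```python
import re

# Hypothetical fragment in the same shape as champion.js entries
sample = '{"id":"Annie","key":"1"},{"id":"Olaf","key":"2"}'

# Non-greedy captures pull each hero's name and numeric key
hero_id = re.findall(r'"id":"(.*?)","key"', sample)
hero_num = re.findall(r'"key":"(.*?)"', sample)
# hero_id  → ['Annie', 'Olaf']
# hero_num → ['1', '2']
```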

```python
import os
import re

import requests

# title:  Fetch LOL hero skin images
# author: Jianshu, Wayne_Dream
# date:   2018-07-05

HEADERS = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 '
                         '(KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'}


def getHero_data():
    """Fetch champion.js and extract the paired hero id/key lists."""
    url = 'http://lol.qq.com/biz/hero/champion.js'
    r = requests.get(url, headers=HEADERS)
    r.raise_for_status()
    r.encoding = r.apparent_encoding
    hero_id = re.findall(r'"id":"(.*?)","key"', r.text)
    hero_num = re.findall(r'"key":"(.*?)"', r.text)
    return hero_id, hero_num


def getUrl(hero_num):
    """Build every candidate skin URL: <prefix>2-<heroKey><skinNo>-9.jpg."""
    part1 = 'https://game.gtimg.cn/images/daoju/app/lol/medium/2-'
    part3 = '-9.jpg'
    skin_num = [str(i).zfill(3) for i in range(1, 21)]  # '001' .. '020'
    Url_list = [part1 + hn + sn + part3 for hn in hero_num for sn in skin_num]
    print('Image URLs built successfully')
    return Url_list


def PicName(hero_id, path):
    """Build a local filename per (hero, skin) pair, in the same order as the URLs."""
    pic_name_list = []
    for hid in hero_id:
        for i in range(1, 21):
            pic_name_list.append(os.path.join(path, hid + str(i) + '.jpg'))
    return pic_name_list


def DownloadPic(pic_name_list, Url_list):
    """Download each URL; tiny responses are placeholders for skins that do not exist."""
    n = len(Url_list)
    for i in range(n):
        res = requests.get(Url_list[i], headers=HEADERS).content
        if len(res) >= 100:  # anything smaller is an error placeholder, skip it
            with open(pic_name_list[i], 'wb') as f:
                f.write(res)
        print('\rProgress: {:.2f}%'.format(100 * (i + 1) / n), end='')


if __name__ == '__main__':
    print('author: Jianshu, Wayne_Dream:')
    print('https://www.jianshu.com/u/6dd4484b4741')
    input('Press Enter to start crawling: ')
    path = 'D:\\LOLimg_wayne'
    os.makedirs(path, exist_ok=True)
    hero_id, hero_num = getHero_data()
    Url_list = getUrl(hero_num)
    pic_name_list = PicName(hero_id, path)
    print('Downloading images, please wait...')
    print('Check them under ' + path + ' ...')
    DownloadPic(pic_name_list, Url_list)
    print('\nDownload complete')
```

The script creates the destination folder, fetches hero IDs, builds all possible image URLs, generates local filenames, and downloads each image while showing progress.
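The `len(res) < 100` check is a size heuristic for the tiny placeholder the server returns when a skin does not exist. A slightly more robust filter is to verify the JPEG magic bytes instead (`is_valid_jpeg` is a hypothetical helper, not part of the original script):

```python
def is_valid_jpeg(data: bytes) -> bool:
    # A JPEG stream starts with the SOI marker FF D8 and ends with EOI FF D9
    return len(data) > 4 and data[:2] == b'\xff\xd8' and data[-2:] == b'\xff\xd9'


# A real JPEG body passes; an HTML error page does not
is_valid_jpeg(b'\xff\xd8' + b'\x00' * 100 + b'\xff\xd9')  # → True
is_valid_jpeg(b'<html>404 not found</html>')              # → False
```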

Mosaic creation: After obtaining the full set of skin pictures, the author uses a free mosaic tool (downloadable from https://fmedda.com/en/download) to import the images, choose "Create photo mosaic", and generate the final large picture. Screenshots of the software UI and the resulting mosaic are provided.
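Under the hood, mosaic tools of this kind split the target image into a grid and replace each cell with the tile whose average color is closest. The core selection step can be sketched in plain Python (`best_tile` is an illustrative helper under that assumption, not anything the software exposes):

```python
def best_tile(cell_color, tile_colors):
    """Return the index of the tile whose average RGB is closest to cell_color.

    cell_color:  (r, g, b) average of one grid cell of the target image
    tile_colors: list of (r, g, b) averages, one per candidate skin image
    """
    return min(range(len(tile_colors)),
               key=lambda i: sum((cell_color[c] - tile_colors[i][c]) ** 2
                                 for c in range(3)))


# Toy example: three tiles that are roughly red, green, and blue
tiles = [(250, 0, 0), (0, 250, 0), (0, 0, 250)]
best_tile((240, 10, 5), tiles)   # → 0 (the reddish tile wins)
best_tile((10, 10, 240), tiles)  # → 2 (the bluish tile wins)
```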

Finally, the author shares a Baidu Cloud link containing the complete skin image collection for readers who want to skip the crawling step.

Original article link: https://www.jianshu.com/p/c963370cd8df

Python · automation · web scraping · Image Mosaic · League of Legends
Written by

Python Programming Learning Circle

A global community of Chinese Python developers offering technical articles, columns, original video tutorials, and problem sets. Topics include web full‑stack development, web scraping, data analysis, natural language processing, image processing, machine learning, automated testing, DevOps automation, and big data.
