
Python Web Project: Visualizing Hot Search Rankings and Domestic COVID‑19 Cases with Flask, Web Scraping, and ECharts

This report describes a Python‑based web application built with Flask that scrapes hot‑search data from Weibo, Baidu and Zhihu, processes it using jieba and other libraries, and visualizes the results together with domestic COVID‑19 statistics using ECharts on a responsive front‑end page.


Experiment Purpose

The experiment integrates basic Python knowledge with third‑party libraries to build a visual hot‑search ranking and domestic COVID‑19 new‑case chart, aiming to improve students' programming, problem‑analysis, and solution skills.

Equipment and Environment

Hardware: multimedia computer. Software: Windows 7/10, Python 3.x.

Experiment Content

1. Use the Flask web framework to create a web project.
2. Apply web-scraping techniques to obtain hot-search data from platforms such as Weibo, Baidu, and Zhihu.
3. Use basic Python libraries for data transformation and analysis.
4. Perform word-frequency analysis on hot-search terms with the jieba library.
5. Design the front-end page with jQuery, HTML, CSS, JavaScript, and ECharts.

In addition, the final output must be clear, aesthetically pleasing, and follow a standardized format.
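The word-frequency analysis in item 4 can be sketched roughly as follows. The titles used here are placeholders, and the sketch falls back to whitespace splitting when jieba is not installed, so it is an illustration rather than the report's actual implementation:

```python
from collections import Counter

try:
    import jieba  # Chinese word segmentation
    segment = jieba.lcut
except ImportError:
    # Fallback so the sketch still runs without jieba installed
    segment = str.split

def count_hot_words(titles):
    """Count word frequencies across a list of hot-search titles."""
    counter = Counter()
    for title in titles:
        # Skip single-character tokens (mostly particles, spaces, punctuation)
        counter.update(w for w in segment(title) if len(w) > 1)
    return counter

# Placeholder titles standing in for the scraped rankings
titles = ["疫情 最新 通报", "疫情 防控 政策", "天气 预报"]
print(count_hot_words(titles).most_common(2))  # [('疫情', 2), ('最新', 1)]
```

The same Counter can then feed both the word-frequency page and the word cloud.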

Experiment Run Process and Analysis

<code># Home page route: fetch the aggregated hot-search data and render index.html
@app.route('/Hot_Bot')
def Hot_Bot():
    data = hotBot()
    return render_template('index.html', form=data, title=data.title)

# Word-frequency page route: run the jieba-based word count and render test.html
@app.route('/cipin')
def cipin():
    data = spider.sum_hot_word()
    return render_template('test.html', form=data)
</code>
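The hotBot() helper called by the first route is not shown in the report. A plausible sketch of what it assembles (the function name, argument names, and dict keys below are assumptions, not the original implementation) is a per-platform dict that the index.html template can iterate over:

```python
def hot_bot_data(weibo_top, baidu_top, zhihu_top):
    """Assemble per-platform top lists into one structure for the template.

    Each *_top argument is a list of {"name": ..., "value": ...} dicts,
    matching the shape produced by the scraper functions.
    """
    return {
        "weibo": weibo_top[:3],  # top 3 for the lower-left panel
        "baidu": baidu_top[:3],  # top 3 for the upper-left panel
        "zhihu": zhihu_top[:3],
    }

# Invented sample data to show the shape
sample = [{"name": f"topic {i}", "value": 100 - i} for i in range(5)]
data = hot_bot_data(sample, sample, sample)
print(len(data["weibo"]))  # 3
```

Keeping the template-facing structure in one place like this makes it easy to add or drop a platform without touching the route.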
<code>import json

import requests

def weibo():
    hot = []
    name = []
    value = []
    url = 'https://weibo.com/ajax/statuses/hot_band'
    header = {
        # Cookie values elided; a logged-in session cookie is required
        'cookie': 'UOR=...; SINAGLOBAL=...; SUB=...; SUBP=...; WBPSESS=...; ULV=...; XSRF-TOKEN=...',
        'accept-encoding': 'gzip, deflate, br',
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36',
        'referer': 'https://www.baidu.com/...'
    }
    # The endpoint returns JSON, so it can be parsed directly;
    # no HTML parsing (BeautifulSoup) is needed here
    req = requests.get(url, headers=header, timeout=10).text
    hot_word = json.loads(req)['data']['band_list']
    for item in hot_word:
        hot.append({"name": item['word'][0:10], "value": item['num']})
        name.append(item['word'])
        value.append(item['num'])
    return hot[0:3], name, value
</code>
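The JSON handling in weibo() can be exercised offline with a stub of the band_list payload. The field names word and num match the code above; the topic strings and counts below are invented sample values:

```python
import json

# Stub of the Weibo hot-band response shape used by weibo()
sample = json.loads("""
{"data": {"band_list": [
    {"word": "某某热搜话题一", "num": 1200000},
    {"word": "某某热搜话题二", "num": 800000}
]}}
""")

# Same transformation as in weibo(): truncate titles to 10 characters
hot = [{"name": item["word"][:10], "value": item["num"]}
       for item in sample["data"]["band_list"]]
print(hot[0])  # {'name': '某某热搜话题一', 'value': 1200000}
```

Testing the transformation against a fixture like this avoids hitting the live endpoint (and the cookie requirement) during development.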
<code>from selenium import webdriver
from selenium.webdriver.common.by import By

def zhihu():
    hot = []
    browser = webdriver.Chrome()  # requires chromedriver on PATH
    browser.get('https://www.zhihu.com/topsearch')
    browser.refresh()
    # find_elements_by_class_name was removed in Selenium 4; use find_elements
    elements = browser.find_elements(By.CLASS_NAME, 'TopSearchMain-title')
    for i in elements:
        hot.append(i.text)
    browser.quit()
    return hot
</code>
<code>&lt;div id="main" style="width: 600px;height: 800px;"&gt;&lt;/div&gt;
&lt;script&gt;
   var ectest = echarts.init(document.getElementById("main"));
   var ec_right2_option = {
       title: {text: "今日疫情热搜", textStyle: {color: 'white'}, left: 'left'},
       tooltip: {show: false},
       series: [{type: 'wordCloud', gridSize: 1, sizeRange: [12,55], rotationRange: [-45,0,45,90],
           textStyle: {normal: {color: function () {return 'rgb(' + Math.round(Math.random()*255) + ', ' + Math.round(Math.random()*255) + ', ' + Math.round(Math.random()*255) + ')';}}},
           data: ddd]
       }];
   ectest.setOption(ec_right2_option);
&lt;/script&gt;
</code>
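The ddd variable in the snippet above is the word-cloud data the template injects. On the Python side, the ECharts wordCloud series expects a list of {name, value} pairs, which a Counter converts to directly (the variable names and sample counts here are illustrative):

```python
from collections import Counter

# Sample word frequencies as produced by the jieba-based counting step
freq = Counter({"疫情": 42, "防控": 17, "天气": 9})

# ECharts wordCloud expects [{"name": ..., "value": ...}, ...]
cloud_data = [{"name": w, "value": n} for w, n in freq.most_common()]
print(cloud_data[0])  # {'name': '疫情', 'value': 42}
```

Serializing cloud_data to JSON in the template then yields exactly the structure the series' data option needs.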

Running Results

The main page displays a domestic COVID‑19 new‑case chart in the center, Baidu hot‑search top 3 on the upper left, Weibo hot‑search top 3 on the lower left, current weather in the upper middle, and a comparison of hot‑search popularity between Weibo and Baidu on the upper right. The lower right shows hot‑search URLs and word‑frequency statistics for the three platforms.

An additional chart visualizes the word‑frequency statistics for Baidu, Weibo, and Zhihu.

Reflections

The project reinforced Python fundamentals and deepened understanding of front‑end technologies, highlighting the importance of overall architecture in project construction. It also revealed personal weaknesses, such as limited proficiency with jQuery, JavaScript, and ECharts configuration, as well as the need to strengthen mastery of Python basics and the jieba library.

Tags: frontend, Flask, data-visualization, echarts, web scraping, jieba
Written by

Python Programming Learning Circle

A global community of Chinese Python developers offering technical articles, columns, original video tutorials, and problem sets. Topics include web full‑stack development, web scraping, data analysis, natural language processing, image processing, machine learning, automated testing, DevOps automation, and big data.
